Donors don’t care much about impact! (say the data)

This article first published in Third Sector.

There has been a huge rise in interest recently in the impact charities have, so it’s remarkable that only now are we seeing rigorous evidence emerging about whether donors actually care. It’s a mixed picture.

A paper published last year reported on an experiment with a US charity, Freedom From Hunger. It divided its donor list into two random groups. Those in one group received a conventional solicitation with an emotional appeal and a personal story of a beneficiary, with a final paragraph suggesting that FFH had helped that beneficiary. Those in the other group received a letter identical in all respects – except that the final paragraph stated (truthfully) that “rigorous scientific methodologies” had shown the positive impact of FFH’s work. 

Donations were barely affected. The mention or omission of scientific rigour had no effect at all on whether someone donated. It also had only a tiny effect on the total amount raised. People who had supported that charity infrequently were not swayed. However, people who had previously given ‘a lot’ – more than $100 – were prompted by the material on effectiveness to increase their gifts by an average of $12.98 more than those in the control group. On the downside, people who had previously made frequent gifts of less than $100 became less likely to give and also shrank their average gifts by $0.81 – all told, the net effect was about nil. But on the upside, this implies that more serious donors will give more if they are presented with decent evidence of effectiveness.

A separate study in Kentucky looked at whether donors give more when there is an independent assessment of the charity’s quality. Donors were each approached about one charity from a list; each charity had been given a three or four-star rating (out of four) by the information company Charity Navigator. Half the donors were shown the rating; the other half weren’t. The presence of the ratings made no meaningful difference to their responses.

The third study has not yet been published, but is perhaps the most telling. It was a multi-arm, randomised, controlled test in which a large number of US donors each received appeals from one charity out of a set of charities that had various Charity Navigator ratings. Half of the appeals included the charity’s rating; the other half did not.

The overall effect of presenting the information was to reduce donations. Showing the ratings brought no more benefit to the high-rated charities than not showing them. For charities with a rating of less than four stars, showing the rating reduced donations; and the lower the rating, the more it reduced donations.

Donors appeared to use evidence of effectiveness as they would a hygiene factor: they seemed to expect all charities to have four-star ratings, and reduced donations when they were disappointed – but never increased them because they were never positively surprised.

Three swallows don’t make a summer, of course, so there’s much more to know about donor behaviour. Even if it transpires that donors really don’t care, our constituents do – hence, so must we.

Contribute to work with the University of Chicago to better understand donor behaviour —>


Enabling Better Decisions: Meta-Research to the Rescue!

This article was first published by our friends at The Life You Can Save.

It’s hard to make evidence-based decisions if much of the evidence is missing, ropey, unclear, or you can’t find it. This has become Giving Evidence’s unofficial slogan as we aim for charitable giving to be based on sound evidence.

Charities produce masses of evidence about their effectiveness. Evidence is a key component of how organisations like The Life You Can Save, GiveWell, and foundations assess charities. But much of that research is missing (unpublished), ropey (uses poor research methods), unclear (so you can’t tell whether it’s ropey or not), or is hard to find because it’s only published on the website of an organisation you’ve never heard of. (There are virtually no central indexed repositories.) 

This damages beneficiaries in two ways: first, donors and other operational charities can’t see what works and therefore what to fund or replicate, so may implement something avoidably suboptimal; and second, the research consumes resources which could perhaps be better spent on delivering something which does work. Hence Giving Evidence works to increase quality, quantity and availability of research.

Giving Evidence is just now starting to study missing (non-published) research by charities: our new project on this topic is, to our knowledge, the first ever study of whether and why charities’ research goes unpublished. We already know that much of it does. When I was a charity CEO, we researched our impact; when the results were good we published them, and when they weren’t we didn’t. I’d never heard of publication bias (of which this is an egregious example), but I had noticed that bad results make for bad meetings with funders…which led to us having to cut staff. In our defence, we weren’t being evil. We were just responding rationally to badly-designed incentives. Fewer than half of the US foundations surveyed that conduct evaluations publish them. We also know that non-publication of research is a major problem in science and in medical research.

We suspect three reasons why charities don’t publish their research. First, the incentives outlined above. Second, they may think that nobody’s interested. By analogy, a campaign in the UK to get grant-makers to publish details of all their grants (which few do) found that many foundations were open to doing this but simply hadn’t realised that anybody would want the data. And third, it’s unclear where to publish even if you want to. There are few repositories, journals, or standard ways of ‘tagging’ research online to make it findable. So charities may (rightly) think that the traffic to material published exclusively on their own websites won’t justify the work of sprucing up the research for publication.

This first study focuses on UK charities supporting people with mental health issues: what research do they do, what do they not publish, and why not. The aim is to figure out what could be done to get more of it published – and by whom.  We’re interested in, for example:

  • How much research is unpublished? We’ll try to estimate the proportion of the research budget whose fruits are never publicly available.
  • Is published research consistently more positive than non-published research? That would suggest the incentive problem to which I personally fell prey.
  • Does the chance of publication depend on whether the research is done in-house versus by an outsider? Or depend on who pays for it? Possibly some funders prevent charities from publishing research.

On charities’ research being hard to find and unclear, Giving Evidence is working with charities in criminal justice. We’re creating a standardised, structured abstract to sit atop any research report by a charity, detailing, for example: what the intervention was; what kinds of people were served, and where; what the research involved (sample size, how participants were selected); what outcomes were measured; the results; and the unit cost. This borrows heavily from the checklists for reporting medical research, which are thought to have markedly improved the usefulness and quality of medical research. We’re also looking at creating, not a central repository as such, but open meta-data to allow charities to tag their research online, plus a central search ‘bot’ (rather like this) through which donors, charities, practitioners, and policy-makers can rapidly find it.
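To make the idea concrete, here is a minimal sketch of what such a structured abstract might look like as machine-readable meta-data. The field names and values are illustrative assumptions only, not the standard we are building; they simply mirror the items listed above.

```python
# Illustrative only: hypothetical field names sketching a structured abstract
# for a charity's research report, mirroring the items listed above.
structured_abstract = {
    "intervention": "Weekly arts workshops in two prisons",
    "population": {"who": "Adult men serving sentences under 4 years",
                   "where": "North-West England"},
    "research_design": {"type": "pre/post survey with comparison group",
                        "sample_size": 120,
                        "selection": "all participants in the 2013 cohort"},
    "outcomes_measured": ["reoffending within 12 months", "self-reported wellbeing"],
    "results": "3 percentage points lower reoffending than comparison group",
    "unit_cost_gbp": 450,
}

# Open meta-data like this could be embedded in a report's web page, so that a
# search tool can index it and let a donor or practitioner filter by, say,
# intervention type or outcome measured - without needing one central repository.
```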

And on charities’ research being ropey, we’re working with a foundation to assess the quality of research that its grantees are producing – whether requested by that foundation, requested by other funders, or initiated by the charities themselves. The quality of charities’ research has itself barely been researched. We know of just one study, by a UK foundation, which found that about 70% of the research it received from grantees was what it called ‘good’, and that some appeared to be totally fabricated.

Medicine has made great strides by enabling front-line practitioners to make decisions based on sound evidence – since in their world, like ours, the best course of action isn’t always evident. Hence they devote considerable resource to figuring out how much research is ropey, and why, and fixing it. They have whole teams devoted to improving research reporting, to make it clearer. Other teams look at ‘information infrastructure’ to ensure that evidence can be rapidly found; and many people study non-publication and selective publication of clinical research and work on rooting it out. Thus meta-research – research about research – is essential to improving decisions. Far from just technical and dry, good meta-research can help improve real beneficiaries’ lives.

It’s thought that fully 85% of all medical research is wasted – because it goes ‘missing’, is too ropey, is unclear, or can’t be found. We need to know if and where charitable resources are being similarly wasted, and make haste to fix it. Giving Evidence’s meta-research and work on the information infrastructure are, we think, important steps.

We’ll report back later on what we find.

The issues and our work:


Charities should do much less evaluation

Stand over there, would you, while I throw this wellington boot. I want you to see how well I throw it. Pay attention: you need to judge me on my welly-throwing. Oops, that throw wasn’t very good! Let’s not count that. Ah, the second throw was better. OK, now my assistant will measure how far it went. No – him, not you. It’s actually quite hard to measure it properly – the tape has to be taut, so I have to secure it in the ground here – and I’ve not learnt to do that properly. Anyway, a bit of slack is all to the good! We’ll use this tape-measure which we made: it uses a special unit of distance which we invented.

****

This, I suspect, is uncomfortably close to how charities’ monitoring and evaluation work. Charities get judged on ‘evaluations’ which they themselves produce, for which they design measures, and they decide whether and what to publish. It appears not to help them much. If the aim is to improve decisions – by operating charities, by funders, by policy-makers – by enabling access to reliable evidence about what’s worth doing and what’s worth prioritising, then much of it fails: it’s just too ropey.

This article first appeared in Third Sector. A pdf of it is here.

This needs to stop. It wastes time and money, and – possibly worse – pulls people towards bad decisions. My aim here isn’t to just bitch, but rather to honestly present some evidence about how monitoring and evaluation actually works currently, and make some suggestions about creating a better set-up.

Why are we evaluating?

When asked in 2012 what prompted their impact-measurement efforts, 52% of UK charities and social enterprises talked about funders’ requirements. Despite being social-purpose organisations, the proportion which cited ‘wanting to improve our services’ was a paltry 7%[i].

A study by two American universities indicates the incentives which influence charities’ evaluations. In a randomised controlled trial, the universities contacted 1,419 micro-finance institutions (MFIs) offering to rigorously evaluate their work. (It was a genuine offer.) Half of the invitations referenced a (real) study by prominent researchers indicating that microcredit is effective. The other half of the invitations referenced another real study, by the same researchers using a similar design, which indicated that microcredit has no effect.

The letters suggesting that microfinance works got twice as many positive responses as those which suggested that it doesn’t work.[ii] Of course. The MFIs are selling. They’re doing evaluations in order to bolster their case. To donors.

Hence it’s little surprise if evaluations which don’t flatter aren’t published. I myself withheld unflattering research when I ran a charity (discussed here). Withholding and publication bias are probably widespread in the voluntary sector – Giving Evidence is starting what we believe to be the first ever study of them – preventing evidence-based decisions, and wasting money.

Bad method

If charities want (or are forced, by the incentives set up for them) to produce evaluations which flatter them, they’re likely to choose bad research methods. Consider a survey. If you survey 50 random people, you’ll probably hear representative views. But if you choose which 50 to ask, you could choose only the cheery people. Furthermore, bad research is cheaper: surveying five people is cheaper than surveying a statistically meaningful 200. A charity in criminal justice told me recently of a grant from a UK foundation “of which half was for evaluation. That was £5,000. I said to them that that’s ridiculous, and kind of unfair. We obviously can’t do decent research with that.”
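To give a rough sense of why £5,000-sized samples can’t settle much: for a simple yes/no survey question, the 95% margin of error under simple random sampling is at worst about 0.98/√n. The short calculation below is a back-of-envelope illustration of the gap between 5 and 200 respondents, not anything from the charity’s actual study.

```python
import math

def worst_case_margin_of_error(n: int) -> float:
    """Approximate 95% margin of error for a yes/no proportion,
    assuming simple random sampling and the worst case p = 0.5."""
    return 1.96 * math.sqrt(0.25 / n)

for n in (5, 200):
    print(f"n = {n:>3}: roughly +/- {worst_case_margin_of_error(n):.0%}")
# n =   5: roughly +/- 44%
# n = 200: roughly +/- 7%
```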

Charities’ research is often poor quality. The Paul Hamlyn Foundation assessed the quality of research reports it received from its grantees over several years, and graded them: good, ok, not ok. The scale it used was much more generous than how medics grade research. Even so, 70% was not good. Another example is the Arts Alliance’s library of evidence produced by charities using the arts in criminal justice. About two years ago it held 86 studies. When the government’s offender-management service looked at that evidence for a review which had a minimum quality standard, how many of those studies could it use? Four. The new ‘what works centre’ for crime reduction found much the same. It searched for all systematic reviews about crime reduction [systematic reviews compile all findable evidence above a stated quality threshold] and found 337. Giving Evidence asked how much research by charities had contributed to them, and the answer was ‘very small’. One charity CEO we interviewed recently blurted it right out:

When I first started in this [sector], I kept talking about evaluation and he [a senior person in the charity sector] said to me ‘don’t worry about that. You can just make it up. Everybody else does. At the very least you should exaggerate a lot. You’ll have to, to get funded.’

“Ask an important question and answer it reliably”

This is a central tenet of clinical research. Though it sounds obvious, it isn’t what happens in our sector. On reliability, much research by charities fails, as discussed. That’s almost inevitable: investigating causal links is hard, and most charities don’t have those skills. Given the fragmentation (the UK has 1,475 NGOs in criminal justice alone), you wouldn’t want them all to hire a researcher.

And charities’ research often seems to fall short on importance too: 65% of CEOs of US foundations say that generating meaningful insights from evaluations is ‘a challenge’[iii].

The collective spend on evaluation in the US is 2% of total grant-making[iv]. That proportion of UK grants would be £92m. That’s easily enough for many pieces of reliable research, but split between loads of organisations and into pieces of £5,000, it can only generate garbage. It’s as though we’re mountaineering, and everybody gets into the foothills but nobody reaches the summit. Everybody tickles the question but no-one nails it.

It’s wasteful and it should stop.

We need one other thing too. Almost all decisions are between options: this intervention versus that one, for example. To enable evidence-based decisions, evaluations must enable comparisons. So it’s no good if everybody designs their own tape-measure[1]: a survey of 120 UK charities and social enterprises found over 130 measurement tools in play[v]. We need standardised metrics. These needn’t be some impossible universal measure of human happiness, but could be standardised within specialisms such as some types of mental health care, or job creation or back-to-work programmes.

Cite evidence, don’t produce it

When I get in an aeroplane, I do not wish my flight to be in a rigorous trial to conclusively prove whether the plane will stay up or not: I want to know that that’s been established already. If an intervention is innovative – say I’m having a new medical drug – then obviously it won’t yet have been fully evaluated, but it’s reasonable to ask that the practitioner can cite some evidence that this intervention isn’t bonkers: maybe it’s a variation on a known drug, or other research suggests a plausible causal mechanism.

We should do more of this in our sector. We should expect organisations to cite research which supports their theory of change; but we don’t need every single organisation to produce research.

Imagine that you’re considering starting a breakfast club in a school. Should you do an evaluation? The table at the end of this article explains.

Answer: no! The first thing you do is look at the literature to see what’s already known about whether they work. To be fair, ‘the literature’ is currently disorganised, unclear and tough to navigate (hence Giving Evidence is working on that – more detail soon), but ideally you’d look at research by other charities and academics and others.

If that research is reliable and shows that the clubs don’t work, then obviously you stop.

If that research is reliable and shows that clubs do work, then just crack on. The evaluation has already been done and you don’t need to duplicate it: by analogy, we don’t expect every hospital to be a test site.  You can just cite that evidence, and monitor your results to check that they’re in-line with what the trials predict. (If not, that suggests a problem in implementation, which a process evaluation can explore.)

This of course is different to what happens now: in the model I’m suggesting, in the circumstances described, you will never have a rigorous evaluation of your breakfast club. Just as most cancer patients will never be in a rigorous trial, and you never want to be in one by an airline. But you will (a) have a sound basis for believing that your club improves learning outcomes (in fact, a much better basis than if you’d attempted an evaluation, like our friend earlier, with just £5,000) and (b) won’t have spent any time or money on evaluation. Of course, this model requires funders, commissioners, trustees and others to sign up to the ‘cite, don’t necessarily produce’ model of research, which I realise isn’t trivial. They too would look for evidence before they fund, rather than looking just at the ‘monitoring and evaluation’ which emerge afterwards. The Children’s Investment Fund Foundation, for example, reviews the literature relevant to any application it’s considering.

Under this model, many fewer evaluations happen. Those few can be better.

If your literature search finds no evidence because it’s a novel idea, then look at relevant adjacent literature (see the section on evidence-based innovation below), run a pilot, as described, and, if it works, eventually decide whether to do a ‘proper’ evaluation.
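Pulling those branches together, the decision logic amounts to something like the sketch below. It is only an illustration of the argument above – the function name and the ‘reliable evidence’ labels are hypothetical placeholders, not a real tool or database.

```python
# Hypothetical sketch of the "cite, don't necessarily produce" decision logic
# described above. The inputs are placeholders, not a real evidence base.

def what_research_should_we_do(literature_findings: str) -> str:
    if literature_findings == "reliable evidence it doesn't work":
        return "Stop: don't run the programme, and don't evaluate it."
    if literature_findings == "reliable evidence it works":
        return ("Run the programme, cite the existing evidence, and monitor "
                "that your results match what the trials predict.")
    # Novel idea: no directly relevant evidence either way
    return ("Check adjacent evidence that the idea isn't bonkers, run a pilot "
            "(monitoring only), and later consider an independent, properly "
            "funded evaluation.")

print(what_research_should_we_do("reliable evidence it works"))
```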

Then arise the questions of who does that evaluation and who pays for it. I don’t have all the answers (and am interested in your ideas), but a few ‘points of light’ are clear.

First, the evaluation shouldn’t be funded by the charity. It’s a public good: other people will use it too, so it’s unfair to ‘tax’ the first-mover by making them fund it. In international development, many institutions want to use reliable evaluations but few are willing to pay, so many of them are funded centrally as a public good, through the International Initiative for Impact Evaluation (3ie), essentially a pooled fund from the Gates Foundation, the Hewlett Foundation and you, the UK tax-payer, through DFID. [As an aside, almost every sophisticated thing in international development has DFID involvement somewhere.]

Second, the budget for the evaluation should have nothing to do with the size of the grant. If the question is important, it should be answered reliably, even if that’s expensive. If an adequate budget isn’t available, don’t evaluate at all: a bad answer can be worse than no answer and is just wasteful. Let’s not tickle questions.

Third, the evaluation shouldn’t be conducted by the charity – for the reasons of skill and incentives we’ve seen. The obvious answer is academics, but sadly their incentives aren’t always aligned with ours: their funding and status rest on high-profile journal articles, so (a) they might not be interested in the question and (b) their ‘product’ can be impenetrably theoretical (and may be paywalled). Several people in the last month have suggested that young researchers – PhD students and post-docs – with suitable skills may be the answer: some system to broker them in to charities whose work (genuinely) needs evaluating, rather as Project Oracle is doing with some charities in London.

Does evaluation preclude innovation?

No. You can tell that it doesn’t because the model in which charities cite research, but don’t always produce research, is essentially what happens in medicine where there’s masses of innovation. In fact, reliable evaluation is essential to innovation because reliable evaluations show which innovations are worth keeping. They also show what’s likely to work. Few things are totally new. Most build on something already known. Suppose that you have a new programme for countering workplace gender discrimination. It relies on magic fairies visiting people at night. Well that’s interesting, because there’s no evidence of magic fairies in the whole history of time. Thus there’s no evidence to support the notion that this programme will work.

By contrast, suppose that your programme assumes that people will follow the crowd, shy away from complicated decisions and are weirdly interested in hanging on to things they already own. Those three traits of human behaviour are very well-established – Daniel Kahneman was awarded a Nobel prize for proving the last of these, and substantial evidence for them all is in his book, Thinking, Fast and Slow.

At the outset, you won’t have any evaluations of your particular programme, but you can cite evidence that it’s not bonkers. We’re not talking here about proof, clearly, but rather about empirically-driven reasons to believe. What gives you reason to think that it’ll work? What similar thing works elsewhere? What assumptions does the programme make about human behaviour or organisations or political systems, and what evidence supports those assumptions?

Hence the “cite research, don’t necessarily produce research” model reduces the risk of funding or implementing innovations which fail, and thereby wasting time, money and opportunity. It allows us to stand on the findings of many generations and disciplines and, hence, see better whether our innovation might work. We might call this “evidence-based innovation”.

On our guard

If there is no evidence, that doesn’t prove that the programme won’t work – but it should put us on our guard. The Dutch have a great phrase: “comply or explain”. If your innovative idea doesn’t comply with the existing evidence, then you have more explaining to do than if it does.

For example, to improve exam results, various economists handed schoolchildren a $20 note at the exam hall door. It sounds crazy. The students were told to hand the money back if they didn’t do well. Now, suddenly, it sounds sensible. This innovation is informed by Kahneman’s finding that people will work hard to retain something they already own – harder than they would work to gain that thing in the first place.

Context is, of course, important. Perhaps the evidence came from a time or place that is materially different and hence doesn’t apply – or, at least, requires a bit of translation to here and now. Hence, innovations might be evidence-informed, rather than proven.

And once your new gender programme is running, we need to see whether it really works – not just whether it looks as if it’s working. For that, we need rigorous evaluations.

___________

What’s evaluation, what isn’t and what to do when?

“Evaluation is distinguished from monitoring by a serious attempt to establish causation”, says Michael Kell, chief economist at the National Audit Office.

That research is not needed all the time. For service delivery, the types of research which are useful at various stages of a programme’s development are as follows, taking the example of a school breakfast club:

Stage: Pilot
Purpose, and useful information to gather: Establish whether the programme is feasible, whether there is demand, the resource requirements (time, people, cost), and the management challenges and costs.
Type of research: monitoring.
Application to breakfast club: How much cereal is needed? Do children and parents want it? How many staff, and how much time, are needed to wash up? How much does it all cost?

Stage: Test
Purpose, and useful information to gather: Now that the programme is stable and manageable, investigate whether the inputs cause the intended outcomes.
Type of research: evaluation, ideally rigorous (e.g., with an equivalent control group) and conducted and funded independently. Most programmes need several evaluations, in diverse circumstances.
Application to breakfast club: (How) does a breakfast club improve learning outcomes?

Stage: Scale-up / delivering services
Purpose, and useful information to gather: The programme is now known to be effective and can be scaled up. We don’t need to evaluate it again, so we can just monitor it to ensure that it’s working as expected.
Type of research: monitoring. Monitor beneficiary views, uptake, measurable results (e.g., test scores), and cost.
Application to breakfast club: Are the changes in learning outcomes in line with results from the trials? If not, something may be awry in implementation.

Monitoring and evaluation of research-and-development work, and of advocacy, both work rather differently.

This table does not cover process evaluation, which is separate (and highly useful). That aims to understand whether the intervention was actually delivered as planned, to examine variations in cost, quality, staffing and so on, and to identify operational improvements.

How to improve charities’ research–>

___________

[1] This remains a terrible problem in medical research. For example, a study of 2000 studies of schizophrenia found 640 different measurement instruments, of which 369 were used only once.

[i] Making an Impact: Impact Measurement Across Charities and Social Enterprises in the UK, NPC, October 2012

[ii] Findley, M. Aversion to Learning in Development? A Global Field Experiment on Microfinance Institutions. [Online] http://www.michael-findley.com/uploads/2/0/4/5/20455799/mfi_learning.22mar13.pdf [Accessed on: 24.09.14]

[iii] http://www.effectivephilanthropy.org/portfolio-items/the-state-of-foundation-performance-assessment/ page 8

[iv] http://www.effectivephilanthropy.org/portfolio-items/the-state-of-foundation-performance-assessment/ page 8

[v] http://inspiringimpact.org/2012/10/24/measuring-the-market/


How do you make people give more? Research in the US has some surprising messages

In the US, individual charitable giving is much vaunted, but it’s flat. Once you adjust for inflation, it’s been between 2 per cent and 2.2 per cent of income for more than 30 years. Identifying how to increase giving is the focus of research by the University of Chicago’s economics department; and since it has more Nobel Prizes than any other, it’s worth listening to. I went to hear its findings, and these are some snippets. 

Being near the finish line helps. The Center for Environmental Policy Analysis at the University of Central Florida asked 3,000 Florida households for funds for computers. The letters said, variously, that Cepa already had 10 per cent, 33 per cent or 67 per cent of the amount needed. The effect was huge – the “67 per cent” letters raised six times more funds than the “10 per cent” ones, and the former received more than twice as many responses (8.2 per cent) as the latter (3.7 per cent).

Words are important. In one experiment, people could buy doughnuts from what was obviously a charity fundraising stall and could choose the amounts they paid. When the transaction was framed as a payment, the average exchange was $1.60; but when framed as a donation, it went up by nearly a third to $2.10. Some people made the donation without taking a doughnut at all.

In an experiment carried out by the Public Broadcasting Service, a non-profit US TV broadcasting network, some people were offered the gift of a PBS-branded pen. Perhaps surprisingly, people gave more when nothing was offered. The same happened with a solicitation for Save the Children: some people received a normal ask, whereas others were told they would be entered in a prize lottery. The average gift from the lottery group was $26; from the non-lottery group it was $32.

The importance of words comes up again in an experiment in which people were offered an item and asked to pay either what they wanted for it, or what they could. The “pay what you want” message produced an average exchange of 64 cents, whereas “pay what you can” gathered more than a quarter more, with an average of 82 cents.

This doesn’t necessarily make for loyal donors. In an experiment at Yale, students earned money for doing a dull task that they could either keep or donate to one of several charities. Of those who donated, only 55 per cent could recall afterwards the name of the charity they chose.

A few things are striking about the broad set of experiments from which these examples are drawn. First, none of them looks at whether giving increases overall. Second, they’re all randomised controlled trials. That method still seems to be controversial in relation to programmes, yet it’s well entrenched on the fundraising side. Third, they focus only on individual donors. Major donors, foundations and corporations might behave differently, and studies are brewing to investigate that. Fourth, the examples are all from the US, so results may or may not apply here.

Lastly, of course, impact doesn’t necessarily follow inputs. It might be quicker and easier to get a donor to give to an organisation that achieves twice as much per dollar than to get them to give twice as much. We’re on the case there, too.

This article first published in Third Sector.

How can donors know if a programme is working?—>


Charities should do fewer evaluations; those few can be better

It’s hard to make evidence-based decisions if much of the evidence is missing, garbage, unclear, or you can’t find it. Talk given in Barcelona (18 mins).

More examples of important evidence being missing or garbage–>

What Giving Evidence is doing to make charities’ evidence clearer and easier to find--> 


Give Your Best, this Giving Tuesday

Caroline Fiennes explains how to maximize the effect of your donation, even if you have no money at all. [This article was first published by GivingTuesday.]

The basics which you must know about charities before you start

Some charities are miles better than others 

This sounds rather heretical because we often think that all charities are good. But we also think that teaching is good, and so is providing medical care, and yet we know that some teachers are better than others, some doctors, some treatments. It’s the same with charities, so your choice matters.

For example, in Kenya, where diarrhoea from dirty water is a major problem, delivering chlorine to households can prevent diarrhoea for a certain cost, but giving people chlorine at the village water source achieves the same result for less than half that cost.

Similarly in North India, free village clinics are pretty good for getting children immunised. But if clinics offer mothers free lentils for every child immunised, immunisation rates increase more than six times.

And in Southern India – where I was once a teacher – children skip school a lot.  Giving the parents cash if their children show up (a respectable and widely used idea, called a ‘conditional cash transfer’) solves some of this, but giving out free school uniforms achieves ten times as much. And that’s peanuts compared to dealing with intestinal worms which many children there have: for the same price, ‘deworming’ can achieve 25 times as much.

Do the maths. Or cheat.  

The catch is that choosing wisely is hard, because charities rarely have these comparative data: a leaflet picturing a school child doesn’t indicate much about an organisation’s performance. So ignore the fundraising literature and look under the bonnet at what the charity actually does and whether it’s good at it. Making a difference depends on having a good ‘idea’ (strategy) and implementing it well. If you’re into formulae, think of it as: impact = idea x implementation.

In fact, since choosing wisely can be fiddly and laborious, find somebody smart and copy their homework! The charity world includes two types of people who’ve already done their homework in detail. First are independent analysts. GiveWell analyses charities in great detail and only recommends about 1% of the charities it assesses; they’re all in international development, mainly public health. Charity Navigator is much broader, publishing analysis on several thousand US-registered charities. Its ratings look at charities’ performance on financial criteria and on transparency and accountability, and it’s adding information about their results. Global Giving is ‘an eBay’ for international development, and lists many grass-roots organisations which it has vetted.

Second, many charitable trusts and foundations employ people to analyse charities to decide which ones the trusts should fund. Some (but not all) of them are robust, and you’ll be pretty safe supporting charities which they back. You only have to find one whose interests match yours. If you’re interested in creating jobs in the US, look at the F.B. Heron Foundation; if it’s poverty in New York City, look at the Robin Hood Foundation; for international development, look at the Hewlett Foundation. A good sign is when a foundation publishes a sensible-looking strategy and criteria.

Don’t look at administration costs

People often think that low admin costs are a good sign. It turns out that they’re not. The costs which get shown in a charity’s accounts (and I once wrote a whole book about charities’ accounts, so I know!) include all kinds of useful things, like systems to monitor results, evaluate what’s working and make improvements. It’s more accurate to think of them as management costs: and so it’s rather unsurprising that analysis shows that charities with higher ‘admin’ costs tend to perform better.

But I don’t have any money 

Well then, rustle up money for charity from thin air! Try variants of these ideas:

Friends. Starfish, a charity which helps HIV/AIDS orphans in South Africa, is supported by young professionals in the West who host dinner parties in their homes and get guests to donate to Starfish the money they would have spent if the party had been in a restaurant: money which nobody had earmarked for charity.

Neighbours. Fred Mulder lives in London, UK, and was in a dispute with his neighbours over access to some land that he owns. Rather than all hiring expensive lawyers to resolve it, Fred offered to give his neighbours perpetual access if they each (Fred included) donated £25,000 to an educational charity in Zambia. This generated over £100,000 for charity and improved the neighbours’ relationship, which a legal fight never would have done.

Clients. Fred Mulder is full of these ideas. He’s an art dealer, and sometimes, when negotiations with clients become stuck, he suggests that the difference between his price and the offered price be donated to charity.

Bulk purchasing. A financial services company in a medium-sized British town includes various charities in its IT purchasing processes, so they benefit from the company’s volume discounts.

Hotel toiletries: Some business people who travel a lot give the complimentary toiletries from hotels to a domestic violence refuge. For people on the run from a violent partner, it’s nice if somebody’s provided some decent shampoo.

Things to give which aren’t money

Blood Find your nearest blood donation session at http://www.blood.co.uk/SessionSearcher/search.aspx

Bone marrow Some tissue types are more common in certain ethnic groups of the population, meaning that a patient normally needs a donor from a similar ethnic background to her own. There’s a particular need for stem cell donors from African, African-Caribbean, Asian, Chinese, Jewish, Eastern European and Mediterranean communities. You can register as a bone marrow or stem cell donor when you give blood or at http://www.nhsbt.nhs.uk/bonemarrow/

Business clothes Disadvantaged women trying to get back into work need business clothes – as well as training and confidence – for interviews and when they start work. Dress for Success works in nine countries, and has now helped over 550,000 women. http://www.dressforsuccess.org

Cars Several organisations will collect an unwanted car and turn it into money for charity through http://www.giveacar.co.uk.

Computer equipment Which? has a useful guide to recycling computers http://www.which.co.uk/environment-and-saving-energy/environment-and-greener-living/guides/recycling-computers/pc-recycling-tips/

Coupons and free stuff. You can donate the buy-one-get-one-free items you don’t want; and, as above, complimentary hotel toiletries are always welcome at a domestic violence refuge.

Cycles A number of nonprofit organisations refit unwanted bicycles to send to countries such as Haiti and South Africa, in the process training people in the UK to repair bikes. http://www.re-cycle.org and http://www.recyke-y-bike.org

Furniture The Salvation Army will take furniture to sell in its shops or pass on to homeless people settling in a new home. http://www.salvationarmy.org.uk

Gardens Landshare brings together people who want to grow their own food but have no place to do it and those who have land to share but lack time, experience or muscle-power. www.landshare.net

Glasses Visionaid Overseas organises a nationwide recycling scheme for old or unwanted spectacles. http://www.vao.org.uk

Musical instruments can go to school music programmes, senior citizens, talented young students and community groups; charities can also use them at events and as prizes to help raise money.

Paint Community RePaint schemes collect unwanted, surplus paint and re-distribute it to individuals, families and communities in need, improving the wellbeing of people and the appearance of places across the UK. www.communityrepaint.org.uk

You can even give your hair! If you have more than ten inches of hair cut off, take it home and donate it to make wigs for people who’ve lost hair due to medical treatments. www.charityintersection.com/donatehair.html or www.littleprincesses.org.uk/donate/hair.aspx

The rest?  

  • Charity shops take clothes, books, records, CDs, DVDs and jewellery, and some take furniture and electrical goods. Remember to fill in a Gift Aid form.
  • Primary schools and nurseries can use all sorts of things for craft projects: fabric, knitting wool, rolls of wallpaper, old Christmas cards, jars and bottles. Just ask first.
  • Find a new home for almost anything on Freecycle and save it from landfill. www.freecycle.org
  • Lend it to people in your neighbourhood through http://www.streetbank.com
  • Sell it and donate the proceeds. Through the online marketplace eBay you can donate the proceeds from selling virtually anything to a charity of your choice. Secondhand books can also be sold through Abebooks http://www.abebooks.co.uk and Amazon.

But do check with the charity first. People donate real junk, so much so that aid agencies run an annual competition for Stuff We Don’t Want (#SWEDOW). Past winners have included second-hand knickers (!) and the 2.4 million Pop-Tarts® airdropped onto Afghanistan by the US government in 2002. Far from being amusing tales, these items create costs for charities because they need storing and sorting, and simply become a hindrance. It’s not difficult to check that a charity needs an item before sending it.


Lessons during the decade since the Asian tsunami

This article first appeared in Third Sector

It’s 10 years this December since the Indian Ocean Boxing Day tsunami. We salute those who died, those who mourn, those who tended; and we celebrate those who’ve since sought to improve response to disasters and emergencies: they’ve been remarkably effective.

For doctors in unfamiliar situations, the first port of call is The Cochrane Collaboration, which produces a huge set of high-quality reports that collate and synthesise the (reliable) evidence about what to do. These Cochrane Reviews are produced by more than 34,000 researchers in 120 countries, most of whom do them voluntarily, coordinated by a small band of experts from a tiny office in a residential street in north Oxford.

The day after the tsunami, the former co-chair of the collaboration, Mike Clarke, realised that The Cochrane Library, where the reviews are published, was pretty unhelpful for disaster situations. Reports on fractures might assume you’re in a first-world hospital with several hours to spare per patient. You’re not, and you don’t: you’re in a makeshift field hospital with patients queueing up. Worse, relevant Cochrane Reviews are scattered, filed under umpteen categories, and you’ve got no time to search and a dodgy internet connection. And some reviews are paywalled.

Evidence Aid was born that day. Now based in Belfast, it compiles relevant reports from Cochrane and elsewhere so they’re easy to find. It creates new reviews if NGOs and medics feel existing literature is inadequate. Volunteers are creating 100-word summaries, adapting guidance to the situation: in surgery, for example, the evidence shows you’re as safe washing a wound with tap water as using expensive sterile saline; but if clean water has become precious after a disaster, or is dirty, use saline for washing. It’s obvious, really, but chaos allows no time for thinking.

It seems to be working: Evidence Aid’s advice has prevented the use of various treatments that sound plausible but are shown by the evidence to actually make things worse; NGOs contribute their ideas and requests; and advice went to the World Health Organisation within 24 hours of the Haitian earthquake in 2010.

A few months after the tsunami, a girl turned up in a clinic in Indonesia, apparently with measles – a surprise, because many agencies had worked to prevent measles after the tsunami. It transpired that she’d been vaccinated three times by several different organisations.

At the time, there was no common standard through which charities and government agencies could report publicly about their activities, so it was all but impossible to get data on what other agencies were doing. The International Aid Transparency Initiative was set up to make information easier to find and more useful, and thereby avoid these situations. The Department for International Development was the first entity to publish in IATI’s format (in 2011), since when 280 others have done so.

Much remains to be done. The response in Haiti is infamous for poor coordination and inappropriate aid: the International Olympic Committee funded a new stadium – at $18m! When Typhoon Haiyan hit the Philippines last year, many people (myself included) asked that we not help in the way we ‘helped’ in Haiti.

DOI: I’m an unpaid advisor to Evidence Aid. Here’s why –>


What constitutes good evidence?

Lovely interview about what constitutes good evidence: which donors it is relevant to, whether requiring evidence impedes innovation or encourages donors to focus on short-term outcomes, and so on. (It gets into English after about 2 minutes.)

This is the Forbes article which I reference.


Getting people to give better

New initiative aims to get donors to give better

Many people look at getting people to give more. Giving Evidence and the Social Enterprise Initiative at the University of Chicago Booth School of Business are starting work on getting people to give better. We’re looking at (i) what good giving is, i.e., what donor behaviours produce the best outcomes, and (ii) how to persuade, enable or nudge donors to adopt those behaviours.

First, we’re developing a ‘white paper’, to be published early in 2015, to collate what is known about effective giving, what isn’t yet known, and what would be useful for researchers to find out. [The University of Chicago Booth School of Business was recently ranked by The Economist as the best business school in the world.] Then there will be a series of research projects on those topics.

Perhaps persuading somebody to give better has the same social effect as getting them to give more. Perhaps it’s cheaper, quicker and easier to get them to give better than to give more. The way that people give matters: for instance, the cost of raising capital for charities is about 20-40 per cent, against only about 3-5 per cent for companies, and charities turn away some donors who are fiddly to deal with. Plus money doesn’t always go where it’s most needed: for example, about 90 per cent of global health spending goes on 10 per cent of the disease burden. And making many small gifts is demonstrably more wasteful than making a few large ones.

We aim to identify questions which non-profits, funders and other practitioners want answered about making giving better, and to encourage researchers to address them. Those questions include the following:

  • How do various donors (including foundations, corporates, individuals) define a ‘successful gift’?
  • Is success affected by (eg):
    • being hands-on?
    • donors working together (eg in giving circles)?
    • gift size?
    • how and whether the grant is tracked?
    • whether the donor gives, lends or invests?
  • What does it cost to raise and manage grants of different sizes?
  • How and when can one influence the cause that a person supports?
  • How do donors choose causes, charities or grantees, and how influence-able is that?
  • How do donors choose processes (eg for sourcing grantees, selecting which to support)?

However one defines success for a grant, it would be useful to know (wouldn’t it?) whether and when and how the chance of success is affected by how the donor gives.

In terms of scope, we’re looking at all giving: ‘retail’ individuals, endowed foundations, fund-raising foundations, private family foundations, companies – the lot.

Do get involved!

Please send relevant material to jo [dot] beaver [at] giving-evidence [dot] com

Feedback from readers suggests that an example might help here. We’re interested in what makes for successful giving. So if a particular donor or funder has data on the success rate of its grants (i.e., the number which ‘succeed’, on whatever measure of success that donor uses) and how that success rate varies with (things like) grant size, grantee size, how the donor came across that organisation (e.g., open application process, in the pub, through a network), how hands-on the grant was, its duration, whether it was co-funded… we’re VERY interested in that.

We’re not at this stage (for the white paper) doing primary research (e.g., working through funders’ historical grants, assigning ‘success scores’ to them and cross-tabbing those with things like size). We do expect to do primary research in future.
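For readers wondering what that future primary research could look like in practice, here is a deliberately simplified sketch, using entirely made-up data and hypothetical column names, of cross-tabbing grant ‘success scores’ against grant size and sourcing route. It illustrates the kind of analysis described above, not Giving Evidence’s actual method or data.

```python
# Illustrative sketch only: hypothetical data showing what "cross-tabbing
# success against grant size and sourcing route" could look like.
import pandas as pd

grants = pd.DataFrame({
    "grant_size_gbp": [5_000, 20_000, 75_000, 150_000, 40_000, 10_000],
    "sourcing": ["open application", "network", "open application",
                 "proactive search", "network", "open application"],
    "succeeded": [0, 1, 1, 1, 0, 1],  # 1 = met the funder's own success measure
})

# Band the grant sizes, then tabulate success rates by band and by sourcing route
grants["size_band"] = pd.cut(grants["grant_size_gbp"],
                             bins=[0, 10_000, 50_000, 1_000_000],
                             labels=["small", "medium", "large"])

print(grants.groupby("size_band")["succeeded"].mean())
print(pd.crosstab(grants["sourcing"], grants["succeeded"], normalize="index"))
```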

To be clear, this project isn’t (just) about getting donors to choose high-impact charities. It’s also about all the other choices which major donors/foundations make, which can have just as much impact, and can indeed annihilate the impact of their grant. A simple example of the effect of how one gives (as opposed to where one gives) is that funders sometimes create so much work for grantees that the grantee would be better off without that grant/relationship at all. In that case, the choice of charity doesn’t matter much!

So questions we’re looking at include: should donors give individually or in groups? Should they proactively search out grantees or let grantees find them? How engaged should they be? How many focus areas should they have? That is, which giving behaviours (of those types) drive the success of grants, in whatever way the donor defines success?

There’s no shortage of opinions on these topics, but we’re looking for data.

The Shell Foundation published data on the percentage of its grants which succeeded when it was, variously, ‘spray and pray’, somewhat focused, and latterly very focused. That’s what we’re after: some empirical basis for ascertaining what makes for effective philanthropy. Obviously the ‘right’ answer may vary between circumstances, just as the right medical treatment depends on the patient’s condition, and those variations are interesting too.

Caroline Fiennes: best philanthropy advisor

Newsflash! Giving Evidence’s Caroline Fiennes has been named a ‘best philanthropy advisor’ by Spears Wealth Management magazine, here.

The profile of Caroline (here) says:

“Caroline Fiennes’ work in philanthropy focuses on making giving as effective as possible by basing it on sound evidence. A physicist in a previous career, Fiennes became interested in the fact that some charities are better than others and wanted to figure out which ones are most effective in order to guide donors to them. This is also true of ways of giving.

The founder of Giving Evidence feels there is ‘often a big mismatch between where the money goes and where it’s needed’, and advises clients on using the available evidence to choose issues and organisations to focus on and support in the most effective ways.

Caroline works a lot on the quality of research available to donors, because charities produce a lot of information, but much of it is of too low a quality to be reliable. This has led to some of her clients giving funds to help produce better evidence.

Caroline and Giving Evidence are working on creating a mechanism for anybody to rate a charity with which they’ve had contact, a little like TripAdvisor or Toptable. This ‘opening up of reputations’ would greatly help donors to make much more informed decisions.”

Stay in touch to hear more as this project progresses!

Why is rating charities important?—>
