Systematic review of evidence to inform funding practice: outdoor learning

“What is known about what works and what doesn’t? What can we learn from the existing literature and experience of other organisations about what works and what doesn’t – and for whom and in what circumstances – which can help us make better funding decisions?”

These questions are the genesis of a study of outdoor education for 8-25 year olds commissioned by the Blagrave Trust, a family foundation which supports disadvantaged young people in southern England. Giving Evidence is working on the study in partnership with the EPPI Centre (Evidence for Policy and Practice Information and Co-ordinating Centre) at UCL, which routinely does systematic reviews of literature in particular areas to inform decision-makers.

We’re excited to do this project because those original questions, and this way of answering them, pertain to any area of charitable funding, service delivery or social policy. The charitable sector focuses a lot on doing monitoring and evaluation – i.e., producing research – but is weirdly unconcerned about using research already created by others. We’ve written about this before, and will do so again: using systematic reviews of the existing literature could save a lot of time and money and significantly upgrade performance. There seems to be appetite in the outdoor learning sector to hear and heed the findings from research.

Study aims and logic

This particular study will:

  • Categorise the various outdoor learning activities in the UK, in order to give funders a more coherent sense of the sector as a whole and a clearer view of their options;
  • Identify the various outcomes which organisations running those various activities are measuring, i.e., what providers seem to be seeking to achieve; and
  • Assess the designs of individual evaluations and the standard of evidence offered in total for different types of outdoor learning.

Such a study can be very valuable. Most obviously it can guide providers and funders to the most effective interventions.

It can also guide the research effort within the sector, and hence reduce research waste. In many sectors, donors and operators collectively spend a lot on research but with no co-ordination about where it is spent. Often the spend goes where activity is the greatest. By contrast, it is better to identify areas which most need additional research. That depends in part on the ‘evaluate-ability’ of the various interventions, i.e., whether they are ready to be evaluated. This is useful because very often, NGOs and others evaluate work:

  • before it is ready: when it’s still in a pilot stage and too unstable for an investigation of its causal effects. Pilots should just be monitored and the beneficiaries consulted. And/or
  • after evaluation ceases to be useful: because the intervention has been adequately evaluated and its effects are known. By analogy, we don’t evaluate all medical drugs forever: eventually we’re pretty confident that we understand a drug’s effect and the trials stop.

The priorities for research are interventions which are stable enough to evaluate, and of course in which enough people are interested, i.e., where the research findings could influence enough activity to be worthwhile.

Third, by assessing the quality of existing research, we hope to raise awareness of research quality – i.e., that some types of study are more reliable than others. Poor quality research normally means that the design doesn’t allow researchers to distinguish the effects of the programme from other factors and from chance.

Both these problems – evaluation at the wrong time, and bad evaluation – are major causes of research waste.

Timing and getting involved

We expect to work on this until the Autumn. We will then publish the findings and work with the Blagrave Trust and others to mobilise players in the sector around them.

We’d love to hear from you if you have studies about outdoor learning in the UK which look at the causal connection between activities and clearly-stated outcomes. Please send them to admin@giving-evidence.com.


Don’t wish for a giving culture like the US

In February, Mark Zuckerberg, the Facebook founder, made the largest-ever single gift to a US hospital – $75m (£49m) to a San Francisco institution. We often hear that the charitable sector in the UK should emulate the giving culture in the US. Well, we should be careful what we wish for: it’s far from clear that this would be of any help at all.

Most obviously, the UK and the US are very different countries. Perhaps the comparison arises only because we more or less share a latitude and more or less share a language. Although US giving per capita looks higher, it’s counting something completely different.

For example, in upstate New York I was amazed to see a road sign saying that the next three miles of highway were being cleaned “courtesy of the Rotary Club”. By contrast, the vast majority of roads in the UK are cleaned courtesy of central and local government, funded by the taxpayer, whether there is a local Rotary Club or not.

In the US, the funds made available for that road cleaning count as charitable giving, whereas the money “given” to the taxman to clean UK highways does not. This does not show that the US is more generous, just that it allocates tasks differently between the state and private citizens. By the same token, charitable giving looks low in Scandinavia and France, though nobody would sensibly claim that these are thus ungenerous nations.

The comparison with the US overlooks where charitable funds in that country actually go. Years ago, I read an analysis [which I now can’t find] of US charitable giving to churches, synagogues, universities and so on. It called them “communities of which the donor is a member”, and found that giving to them accounted for almost all of the per-capita difference between US and UK giving.

It’s clear that US elite institutions receive a good deal: the endowments of Harvard University ($32bn), Yale University ($20bn) and Stanford ($18bn) are testament to that. By comparison, Oxford University’s endowment is about $6bn.

Less striking is the effect of that on the poor: the literary and cultural commentary magazine The Atlantic recently reported that of the 50 largest individual gifts to US charities in 2012, 34 went to educational institutions that mainly serve the elite; nine went to museums and arts organisations, and the rest went to medical facilities and fashionable charities like the Central Park Conservancy. “Not one went to a charity that principally serves the poor and the dispossessed,” the magazine said.

Similarly, much US giving goes to addressing problems that the UK doesn’t have. Schools in the US are funded locally by property taxes, based on property values, so schools in poor areas get much less per child than schools in expensive areas. The director of a large US community foundation told me about its expensive campaign to get equal funding for every child in his city’s public schools. Donors in the UK don’t have to lobby for that because school funding here works quite differently.

Clearly, the US can teach us some things, but not everything. After all, the country where the greatest number of people give regularly is even more different. It’s Myanmar.

This article was first published in Third Sector.


Kids Company shows a general problem

Dreadful practices at the well-known charity Kids Company – around services, governance and evaluation – are exposed in The Spectator and by Genevieve Maitland Hudson here.

Evaluations of a charity’s work are the main tool by which public donors and taxpayers can know if a charity is doing a good job. But these are normally conducted and/or funded by the charity itself, which creates two major problems: first, charities have an obvious incentive to present themselves favourably; and second, most charities lack the research skills to run evaluations well.

This matters. The ‘answer’ from an evaluation depends markedly on how well the research was done. A National Audit Office study found that positive claims about government programmes often come from ropey evaluations whereas robust evaluations only allow for modest claims. And good evaluations usually cost more than bad ones. So it’s hardly surprising that one of the few reviews of the reliability of charities’ evaluations – by the Paul Hamlyn Foundation – found that 70% are not ‘good’.

One complaint of the lady who sold her house to raise funds for Kids Company is that the evaluation report is completely unclear: “Five of its 11 pages were simply photographs of children”. This is why Giving Evidence is working to improve and standardise charities’ research reports, creating a ‘checklist’ of items that they should all contain. This idea comes from medicine where these ‘reporting checklists’ have dramatically improved doctors’ ability to make good decisions between treatments.

Indeed Giving Evidence suspects that many unflattering charity evaluations never get published at all. This ‘publication bias’ problem is rife, well-documented and fatal in medical research, and hence is getting fixed.

This isn’t solely charities’ fault. Charities are responding rationally to the badly-designed incentives they face. Incentives mainly designed and exercised by donors.

By analogy, medicine has made huge strides by improving the quality, clarity, findability, comparability and use of evaluations of drugs and devices. Work is underway to make analogous improvements in the charity world.

We – as donors and taxpayers – should demand, encourage and fund better evaluations.

The evidence-base about charities could get a lot better. Here’s how —>


Non-publication of charities’ research: groundbreaking new project!

This was first published by our friends at Evidence Matters.

It’s hard to make evidence-based decisions if much of the evidence is missing, ropey, unclear or you can’t find it. Charities produce masses of evidence about their effectiveness but Giving Evidence suspects that much of it is wasted because of these four problems.

Research that is poor or hidden damages beneficiaries in two ways. First, donors and other operational charities can’t see what works and therefore what to fund or replicate, so may implement something avoidably suboptimal. And second, the research consumes resources, which could perhaps be better spent on delivering something that does work.

Hence Giving Evidence is this very week starting to study non-publication of research by charities. We aim to understand the extent and causes of non-publication, in order to see what might fix it. Though non-publication of research is a known and major problem in science and in medical research, our project is, to our knowledge, the first ever study of non-publication of charities’ research.

We know that much charity research is unpublished: when I was a charity CEO, we researched our impact, and when the results were good, we published them, and when they weren’t we didn’t. I’d never heard of publication bias (of which this is an egregious example) but I had noticed that bad results make for bad meetings with funders. In our defence, we weren’t being evil: we were just responding rationally to badly-designed incentives.

We suspect four reasons that charities don’t publish their research.

  • First, incentives, as outlined. The system is that charities evaluate themselves and use the results to raise funds. The system is so obviously flawed that when I spell it out at conferences – even of seasoned professionals in this sector – everybody laughs.
  • Second, charities may think that nobody’s interested. By analogy, a campaign in the UK to get grant-makers to publish details of all their grants (which few do) found that many foundations were open to doing this but simply hadn’t realised that anybody would want them.
  • Third, it’s unclear where to publish even if you want to: there are few repositories or journals, and no standard ways of ‘tagging’ research online to make it findable, so charities may (rightly) think that the traffic to material just on their own websites won’t justify the work of sprucing up the research to publish it.
  • Fourth, commercial confidentiality given that many charities compete for government contracts. We suspect that the issue here is a bit different from that in pharma: charities’ interventions are rarely patented, so the confidentiality is around the details of their intervention. That’s the secret sauce on which they compete.

This first study focuses on UK charities supporting people with mental health issues: what research do they do, what do they not publish, and why not. The initial budget isn’t big enough for us to get into publication bias (is the published material different from the non-published material?) or research quality – though we hope to look at both eventually.

We’ll report back on what we find.

So that’s missing research. We’re also looking at the other three problems – of research being ropey, unclear and unfindable – in other sectors. It’s thought that these four problems result in fully 85% of all medical research being wasted. We need to know if and where charitable resources are being similarly wasted, and make haste to fix it.

Giving Evidence’s work is, we think, making important steps.

This talk describes what we’re doing and why:


Donors don’t care much about impact! (say the data)

This article was first published in Third Sector.

There has been a huge rise in interest recently in the impact charities have, so it’s remarkable that only now are we seeing rigorous evidence emerging about whether donors actually care. It’s a mixed picture.

A paper published last year reported on an experiment with a US charity, Freedom From Hunger. It divided its donor list into two random groups. Those in one group received a conventional solicitation with an emotional appeal and a personal story of a beneficiary, with a final paragraph suggesting that FFH had helped that beneficiary. Those in the other group received a letter identical in all respects – except that the final paragraph stated (truthfully) that “rigorous scientific methodologies” had shown the positive impact of FFH’s work. 

Donations were barely affected. The mention or omission of scientific rigour had no effect at all on whether someone donated. It also had only a tiny effect on the total amount raised. People who had supported that charity infrequently were not swayed. However, people who had previously given ‘a lot’ – more than $100 – were prompted by the material on effectiveness to increase their gifts by an average of $12.98 more than those in the control group. On the downside, people who had previously made frequent gifts of less than $100 became less likely to give and also shrank their average gifts by $0.81 – all told, the net effect was about nil. But on the upside, this implies that more serious donors will give more if they are presented with decent evidence of effectiveness.
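To make the arithmetic concrete, here is a minimal sketch (in Python, with invented numbers rather than the Freedom From Hunger data) of how the results of such a two-arm mailing test are typically read: the overall effect is the difference in mean gifts between the treatment and control groups, and the subgroup story appears when that same difference is computed separately for large and small prior donors.

```python
# Sketch of reading a two-arm fundraising test. All figures are invented
# for illustration; they are not the Freedom From Hunger results.
import random
import statistics

random.seed(1)

def simulated_gift(prior_large, treated):
    """Return one donor's gift under made-up assumptions about behaviour."""
    base = 40.0 if prior_large else 8.0
    # Assumed effect of mentioning rigorous evidence: large prior donors give
    # more, small-but-frequent donors give slightly less.
    effect_size = (13.0 if prior_large else -0.8) if treated else 0.0
    return max(0.0, random.gauss(base + effect_size, 5.0))

donors = []
for _ in range(10_000):
    prior_large = random.random() < 0.3
    treated = random.random() < 0.5
    donors.append({"prior_large": prior_large,
                   "treated": treated,
                   "gift": simulated_gift(prior_large, treated)})

def effect(subset=lambda d: True):
    """Difference in mean gift, treatment minus control, within a subgroup."""
    treat = [d["gift"] for d in donors if d["treated"] and subset(d)]
    control = [d["gift"] for d in donors if not d["treated"] and subset(d)]
    return statistics.mean(treat) - statistics.mean(control)

overall = effect()
large = effect(lambda d: d["prior_large"])
small = effect(lambda d: not d["prior_large"])
print(f"Overall effect per donor:        {overall:+.2f}")
print(f"Effect among large prior donors: {large:+.2f}")
print(f"Effect among small prior donors: {small:+.2f}")
```

The point of the sketch is simply that a near-zero overall effect can hide two opposing subgroup effects, which is exactly the pattern the study reports.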

A separate study in Kentucky looked at whether donors give more when there is an independent assessment of the charity’s quality. Donors were each approached about one charity from a list; each charity had been given a three or four-star rating (out of four) by the information company Charity Navigator. Half the donors were shown the rating; the other half weren’t. The presence of the ratings made no meaningful difference to their responses.

The third study has not yet been published, but is perhaps the most telling. It was a multi-arm, randomised, controlled test in which a large number of US donors each received appeals from one charity out of a set of charities that had various Charity Navigator ratings. Half of the appeals included the charity’s rating; the other half did not.

The overall effect of presenting the information was to reduce donations. Showing the ratings brought no more benefit to the high-rated charities than not showing them. For charities with a rating of less than four stars, showing the rating reduced donations; and the lower the rating, the more it reduced donations.

Donors appeared to use evidence of effectiveness as they would a hygiene factor: they seemed to expect all charities to have four-star ratings, and reduced donations when they were disappointed – but never increased them because they were never positively surprised.

Three swallows don’t make a summer, of course, so there’s much more to know about donor behaviour. Even if it transpires that donors really don’t care, our constituents do – hence, so must we.

Contribute to work with the University of Chicago to better understand donor behaviour —>


Enabling Better Decisions: Meta-Research to the Rescue!

This article was first published by our friends at The Life You Can Save.

It’s hard to make evidence-based decisions if much of the evidence is missing, ropey, unclear, or you can’t find it. This has become Giving Evidence’s unofficial slogan as we aim for charitable giving to be based on sound evidence.

Charities produce masses of evidence about their effectiveness. Evidence is a key component of how organisations like The Life You Can Save, GiveWell, and foundations assess charities. But much of that research is missing (unpublished), ropey (uses poor research methods), unclear (so you can’t tell whether it’s ropey or not), or is hard to find because it’s only published on the website of an organisation you’ve never heard of. (There are virtually no central indexed repositories.) 

This damages beneficiaries in two ways: first, donors and other operational charities can’t see what works and therefore what to fund or replicate, so may implement something avoidably suboptimal; and second, the research consumes resources which could perhaps be better spent on delivering something which does work. Hence Giving Evidence works to increase quality, quantity and availability of research.

Giving Evidence is just now starting to study missing (non-published) research by charities: our new project on this topic is, to our knowledge, the first ever study of whether and why charities’ research is unpublished. We already know that much charity research is unpublished. When I was a charity CEO, we researched our impact, and when the results were good, we published them, and when they weren’t we didn’t. I’d never heard of publication bias (of which this is an egregious example) but I had noticed that bad results make for bad meetings with funders…which led to us having to cut staff. In our defence, we weren’t being evil. We were just responding rationally to badly-designed incentives. Fewer than half of the US foundations surveyed that conduct evaluations publish them. We also know that non-publication of research is a major problem in science and in medical research.

We suspect three reasons why charities don’t publish their research. First, incentives, as outlined. Second, they may think that nobody’s interested. By analogy, a campaign in the UK to get grant-makers to publish details of all their grants (which few do) found that many foundations were open to doing this but simply hadn’t realised that anybody would want them. And third, it’s unclear where to publish even if you want to. There are few repositories, journals, or standard ways of ‘tagging’ research online to make it findable. So charities may (rightly) think that the traffic to material published exclusively on their own websites won’t justify the work in sprucing up the research to publish it.

This first study focuses on UK charities supporting people with mental health issues: what research do they do, what do they not publish, and why not. The aim is to figure out what could be done to get more of it published – and by whom.  We’re interested in, for example:

  • How much research is unpublished? We’ll try to estimate the proportion of the research budget whose fruits are never publicly available.
  • Is published research consistently more positive than non-published research? That would suggest the incentive problem to which I personally fell prey.
  • Does the chance of publication depend on whether the research is done in-house versus by an outsider? Or depend on who pays for it? Possibly some funders prevent charities from publishing research.

On research by charities being hard to find and unclear, Giving Evidence is working with charities in criminal justice. We’re creating a standardised, structured abstract to sit atop any research report by charities (detailing, for example, what the intervention was, what kinds of people were served and where, what the research design was (sample size, how participants were selected), what outcomes were measured, the results, the unit cost). This borrows heavily from the checklists for reporting medical research which are thought to have markedly improved the usefulness and quality of medical research. We’re also looking at creating, not a central repository as such, but open meta-data to allow charities to tag their research online and a central search ‘bot’ (rather like this) through which donors, charities, practitioners, and policy-makers can rapidly find it.
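To illustrate the idea (and only to illustrate it – the field names below are hypothetical, not Giving Evidence’s actual checklist), such a structured abstract is essentially a small, consistent data record that could sit at the top of any report and be exported as open meta-data for a search tool to index:

```python
# Illustrative sketch of a structured abstract for a charity's research report.
# Field names are hypothetical, not Giving Evidence's actual schema.
from dataclasses import dataclass, asdict, field
from typing import List, Optional
import json

@dataclass
class StructuredAbstract:
    charity: str
    intervention: str            # what was actually delivered
    population: str              # who was served, and where
    study_design: str            # e.g. "pre/post survey", "RCT"
    sample_size: int
    sampling_method: str         # how participants were selected
    outcomes_measured: List[str]
    results_summary: str
    unit_cost_gbp: Optional[float] = None
    funder: Optional[str] = None
    tags: List[str] = field(default_factory=list)  # open meta-data for search

    def to_json(self) -> str:
        """Serialise to JSON so a central search 'bot' could index it."""
        return json.dumps(asdict(self), indent=2)

example = StructuredAbstract(
    charity="Example Criminal Justice Charity",
    intervention="Weekly arts workshops in two prisons",
    population="Adult male prisoners, north-west England",
    study_design="Pre/post questionnaire, no control group",
    sample_size=48,
    sampling_method="All workshop attendees invited; 48 of 60 responded",
    outcomes_measured=["self-reported wellbeing", "engagement with education"],
    results_summary="Small positive change in both outcomes",
    unit_cost_gbp=310.0,
    tags=["criminal-justice", "arts", "uk"],
)
print(example.to_json())
```

The value is not in any particular technology but in consistency: if every report carried the same few fields, a reader (or a search bot) could compare studies without wading through each one.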

And on charities’ research being ropey, we’re working with a foundation to assess the quality of research that their grantees are producing – whether requested by that foundation, requested by other funders, or initiated by the charities themselves. The quality of charities’ research has itself barely been researched. We know of just one study, by a UK foundation, which found that about 70% of the research it received from grantees was not what it called ‘good’, and some appeared to be totally fabricated.

Medicine has made great strides by enabling front-line practitioners to make decisions based on sound evidence – since in their world, like ours, the best course of action isn’t always evident. Hence they devote considerable resource to figuring out how much research is ropey, and why, and fixing it. They have whole teams devoted to improving research reporting, to make it clearer. Other teams look at ‘information infrastructure’ to ensure that evidence can be rapidly found; and many people study non-publication and selective publication of clinical research and work on rooting it out. Thus meta-research – research about research – is essential to improving decisions. Far from being just technical and dry, good meta-research can help improve real beneficiaries’ lives.

It’s thought that fully 85% of all medical research is wasted – on research which goes ‘missing’, is too ropey, unclear or unfindable. We need to know if and where charitable resources are being similarly wasted, and make haste to fix it. Giving Evidence’s meta-research and work on the information infrastructure are, we think, important steps.

We’ll report back later on what we find.

The issues and our work:


Charities should do much less evaluation

Stand over there, would you, while I throw this wellington boot. I want you to see how well I throw it. Pay attention: you need to judge me on my welly-throwing. Oops, that throw wasn’t very good! Let’s not count that. Ah, the second throw was better. OK, now my assistant will measure how far it went. No – him, not you. It’s actually quite hard to measure it properly – the tape has to be taut, so I have to secure it in the ground here – and I’ve not learnt to do that properly. Anyway, a bit of slack is all to the good! We’ll use this tape-measure which we made: it uses a special unit of distance which we invented.

****

This, I suspect, is uncomfortably close to how charities’ monitoring and evaluation work. Charities get judged on ‘evaluations’ which they themselves produce, for which they design measures, and they decide whether and what to publish. It appears not to help them much. If the aim is to improve decisions – by operating charities, by funders, by policy-makers – by enabling access to reliable evidence about what’s worth doing and what’s worth prioritising, then much of it fails: it’s just too ropey.

This article first appeared in Third Sector. A pdf of it is here.

This needs to stop. It wastes time and money, and – possibly worse – pulls people towards bad decisions. My aim here isn’t to just bitch, but rather to honestly present some evidence about how monitoring and evaluation actually works currently, and make some suggestions about creating a better set-up.

Why are we evaluating?

When asked in 2012 what prompted their impact measurement efforts, 52% of UK charities and social enterprises talked about funders’ requirements. Despite being social-purpose organisations, the proportion which cited ‘wanting to improve our services’ was a paltry 7%[i].

A study by two American universities indicates the incentives which influence charities’ evaluations. In a randomised controlled trial, the universities contacted 1,419 micro-finance institutions (MFIs) offering to rigorously evaluate their work. (It was a genuine offer.) Half of the invitations referenced a (real) study by prominent researchers indicating that microcredit is effective. The other half of the invitations referenced another real study, by the same researchers using a similar design, which indicated that microcredit has no effect.

The letters suggesting that microfinance works got twice as many positive responses as those which suggested that it doesn’t work.[ii] Of course. The MFIs are selling. They’re doing evaluations in order to bolster their case. To donors.

Hence it’s little surprise if evaluations which don’t flatter aren’t published. I myself withheld unflattering research when I ran a charity (discussed here). Withholding and publication bias are probably widespread in the voluntary sector – Giving Evidence is starting what we believe to be the first ever study of them – preventing evidence-based decisions, and wasting money.

Bad method

If charities want (or are forced, by the incentives set up for them) to do evaluations which flatter them, they’re likely to choose bad research methods. Consider a survey. If you survey 50 random people, you’ll probably hear representative views. But if you choose which 50 to ask, you could choose only the cheery people. Furthermore, bad research is cheaper: surveying five people is cheaper than surveying a more robust sample of 200. A charity in criminal justice told me recently of a grant from a UK foundation “of which half was for evaluation. That was £5,000. I said to them that that’s ridiculous, and kind of unfair. We obviously can’t do decent research with that.”
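A toy simulation makes the sampling point vivid. All the numbers below are invented; the only point is that a genuinely random sample of 50 tracks the true picture, while a hand-picked ‘sample’ of the 50 cheeriest people does not:

```python
# Toy simulation of survey selection bias (all numbers invented).
import random
import statistics

random.seed(0)

# Suppose 1,000 service users, each with a 'satisfaction' score out of 10,
# and suppose only 40% of them are genuinely happy with the service.
population = [random.gauss(8.5, 1.0) if random.random() < 0.4
              else random.gauss(4.0, 1.5)
              for _ in range(1_000)]

# A random sample of 50 roughly reflects the population...
random_sample = random.sample(population, 50)

# ...whereas hand-picking the 50 cheeriest people does not.
cherry_picked = sorted(population, reverse=True)[:50]

print(f"True average satisfaction:    {statistics.mean(population):.1f}")
print(f"Random sample of 50:          {statistics.mean(random_sample):.1f}")
print(f"Cherry-picked 'sample' of 50: {statistics.mean(cherry_picked):.1f}")
```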

Charities’ research is often poor quality. The Paul Hamlyn Foundation assessed the quality of research reports it received from its grantees over several years, and graded them: good, ok, not ok. The scale it used was much more generous than how medics grade research. Even so, 70% was not good. Another example is the Arts Alliance’s library of evidence by charities using the arts in criminal justice. About two years ago, it had 86 studies. When the government offender management service looked at that evidence for a review which had a minimum quality standard, how many of those studies could it use? Four. The new ‘what works centre’ for crime reduction found much the same. It searched for all systematic reviews about crime reduction [systematic reviews compile all findable evidence above a stated quality threshold] and found 337. Giving Evidence asked about the contribution of research by charities to those reviews, and was told it was ‘very small’. One charity CEO we interviewed recently blurted it right out:

When I first started in this [sector], I kept talking about evaluation and he [senior person in the charity sector] said to me ‘don’t worry about that. You can just make it up. Everybody else does. At the very least you should exaggerate a lot. You’ll have to, to get funded.’

“Ask an important question and answer it reliably”

This is a central tenet of clinical research. Though it sounds obvious, it isn’t what happens in our sector. On reliability, much research by charities fails as discussed. It’s inevitable because investigating causal links is hard. Most charities don’t have those skills. Given the fragmentation (the UK has 1475 NGOs in criminal justice alone) you wouldn’t want them all to hire a researcher.

And on importance, charities’ research often seems to fall short there too. 65% of CEOs of US foundations say that generating meaningful insights from evaluations is ‘a challenge’[iii].

The collective spend on evaluation in the US is 2% of total grant-making[iv]. That proportion of UK grants would be £92m. That’s easily enough for many pieces of reliable research, but split between loads of organisations and into pieces of £5000, it can only generate garbage. It’s as though we’re mountaineering, and everybody gets into the foothills but nobody reaches the summit. Everybody tickles the question but no-one nails it.

It’s wasteful and it should stop.

We need one other thing too. Almost all decisions are between options: this intervention versus that one, for example. To enable evidence-based decisions, evaluations must enable comparisons. So it’s no good if everybody designs their own tape-measure[1]: a survey of 120 UK charities and social enterprises found over 130 measurement tools in play[v]. We need standardised metrics. These needn’t be some impossible universal measure of human happiness, but could be standardised within specialisms such as some types of mental health care, or job creation or back-to-work programmes.

Cite evidence, don’t produce it

When I get in an aeroplane, I do not wish my flight to be in a rigorous trial to conclusively prove whether the plane will stay up or not: I want to know that that’s been established already. If an intervention is innovative – say I’m having a new medical drug – then obviously it won’t yet have been fully evaluated, but it’s reasonable to ask that the practitioner can cite some evidence that this intervention isn’t bonkers: maybe it’s a variation on a known drug, or other research suggests a plausible causal mechanism.

We should do more of this in our sector. We should expect organisations to cite research which supports their theory of change; but we don’t need every single organisation to produce research.

Imagine that you’re considering starting a breakfast club in a school. Should you do an evaluation? The table below explains.

Answer: no! The first thing you do is look at the literature to see what’s already known about whether they work. To be fair, ‘the literature’ is currently disorganised, unclear and tough to navigate (hence Giving Evidence is working on that – more detail soon), but ideally you’d look at research by other charities and academics and others.

If that research is reliable and shows that the clubs don’t work, then obviously you stop.

If that research is reliable and shows that clubs do work, then just crack on. The evaluation has already been done and you don’t need to duplicate it: by analogy, we don’t expect every hospital to be a test site.  You can just cite that evidence, and monitor your results to check that they’re in-line with what the trials predict. (If not, that suggests a problem in implementation, which a process evaluation can explore.)

This of course is different to what happens now: in the model I’m suggesting, in the circumstances described, you will never have a rigorous evaluation of your breakfast club. Just as most cancer patients will never be in a rigorous trial, and you never want to be in one run by an airline. But you will (a) have a sound basis for believing that your club improves learning outcomes (in fact, a much better basis than if you’d attempted an evaluation, like our friend earlier, with just £5,000) and (b) won’t have spent any time or money on evaluation. Of course, this model requires funders, commissioners, trustees and others to sign up to the ‘cite, don’t necessarily produce’ model of research, which I realise isn’t trivial. They too would look for evidence before they fund, rather than looking just at the ‘monitoring and evaluation’ which emerge afterwards. The Children’s Investment Fund Foundation, for example, reviews the literature relevant to any application it’s considering.

Under this model, many fewer evaluations happen. Those few can be better.

If your literature search finds no evidence because it’s a novel idea, then look at the literature relevant to its underlying assumptions (see below), run a pilot, as described, and if it works, eventually decide whether to do a ‘proper’ evaluation.
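Pulling the last few paragraphs together, the decision logic is simple enough to write down. Here is a minimal sketch (a simplification, and my own summary rather than a formal protocol) of the ‘cite, don’t necessarily produce’ sequence for something like a breakfast club:

```python
# Sketch of the 'cite, don't necessarily produce' decision logic described
# above (a simplification, for illustration only).
from enum import Enum, auto

class Evidence(Enum):
    RELIABLE_NEGATIVE = auto()   # good studies exist and show it doesn't work
    RELIABLE_POSITIVE = auto()   # good studies exist and show it works
    NONE = auto()                # novel idea: no directly relevant studies

def next_step(evidence: Evidence) -> str:
    if evidence is Evidence.RELIABLE_NEGATIVE:
        return "Stop: don't run the programme."
    if evidence is Evidence.RELIABLE_POSITIVE:
        return ("Run it, cite the existing evidence, and monitor results "
                "against what the trials predict (a process evaluation can "
                "explore any divergence).")
    return ("Check evidence for the underlying assumptions, run a monitored "
            "pilot, then decide whether a rigorous, independently funded "
            "evaluation is warranted.")

for state in Evidence:
    print(f"{state.name}: {next_step(state)}")
```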

Then arise the questions of who does that evaluation and who pays for it. I don’t have all the answers (and am interested in your ideas), but a few ‘points of light’ are clear.

First, the evaluation shouldn’t be funded by the charity. It’s a public good – other people will use it too – so it’s unfair to ‘tax’ the first-mover by making them fund it. In international development, many institutions want to use reliable evaluations but few are willing to pay, so many of them are funded centrally as a public good, through the International Initiative for Impact Evaluation (3ie), essentially a pooled fund from the Gates Foundation, the Hewlett Foundation and you, the UK tax-payer, through DFID. [As an aside, almost every sophisticated thing in international development has DFID involvement somewhere.]

Second, the budget for the evaluation has nothing to do with the size of the grant. If the question is important, it should be answered reliably even if that’s expensive. If adequate budget isn’t available, don’t evaluate at all: a bad answer can be worse than no answer and is just wasteful. Let’s not tickle questions.

Third, the evaluation shouldn’t be conducted by the charity – for the reasons of skill and incentives we’ve seen. The obvious answer is academics, but sadly their incentives aren’t always aligned with ours: their funding and status rest on high-profile journal articles, so (a) they might not be interested in the question and (b) their ‘product’ can be impenetrably theoretical (and may be paywalled). Several people in the last month have suggested that young researchers – PhD students and post-docs – with suitable skills may be the answer: some system to broker them into charities whose work (genuinely) needs evaluating, rather as Project Oracle is doing with some charities in London.

Does evaluation preclude innovation?

No. You can tell that it doesn’t because the model in which charities cite research, but don’t always produce research, is essentially what happens in medicine, where there’s masses of innovation. In fact, reliable evaluation is essential to innovation because reliable evaluations show which innovations are worth keeping. They also show what’s likely to work. Few things are totally new. Most build on something already known. Suppose that you have a new programme for countering workplace gender discrimination. It relies on magic fairies visiting people at night. Well, that’s interesting, because there’s no evidence of magic fairies in the whole history of time. Thus there’s no evidence to support the notion that this programme will work.

By contrast, suppose that your programme assumes that people will follow the crowd, shy away from complicated decisions and are weirdly interested in hanging on to things they already own. Those three traits of human behaviour are very well-established – Daniel Kahneman was awarded a Nobel prize for proving the latter, and substantial evidence for them all is in his book, Thinking, Fast and Slow.

At the outset, you won’t have any evaluations of your particular programme, but you can cite evidence that it’s not bonkers. We’re not talking here about proof, clearly, but rather about empirically-driven reasons to believe. What gives you reason to think that it’ll work? What else is similar which works elsewhere? What assumptions does the programme make about human behaviour or organisations or political systems, and what evidence supports those assumptions?

Hence the “cite research, don’t necessarily produce research” model reduces the risk of funding or implementing innovations which fail, and thereby wasting time, money and opportunity. It allows us to stand on the findings of many generations and disciplines and, hence, see better whether our innovation might work. We might call this “evidence-based innovation”.

On our guard

If there is no evidence, that doesn’t prove that the programme won’t work – but it should put us on our guard. The Dutch have a great phrase: “comply or explain”. If your innovative idea doesn’t comply with the existing evidence, then you have more explaining to do than if it does.

For example, to improve exam results, various economists handed schoolchildren a $20 note at the exam hall door. It sounds crazy. The students were told to hand the money back if they didn’t do well. Now, suddenly, it sounds sensible. This innovation is informed by Kahneman’s finding that people will work hard to retain something they already own – harder than they would work to gain that thing in the first place.

Context is, of course, important. Perhaps the evidence came from a time or place that is materially different and hence doesn’t apply – or, at least, requires a bit of translation to here and now. Hence, innovations might be evidence-informed, rather than proven.

And once your new gender programme is running, we need to see whether it really works – not just whether it looks as if it’s working. For that, we need rigorous evaluations.

___________

What’s evaluation, what isn’t and what to do when?

“Evaluation is distinguished from monitoring by a serious attempt to establish causation”, says Michael Kell, chief economist at the National Audit Office.

Such research is not needed all the time. For service delivery, the types of research which are useful at various stages of a programme’s development are as follows, taking the example of a school breakfast club:

Stage: Pilot
Purpose of the stage, and useful information to gather: Establish if the programme is feasible, if there is demand, the resource requirements (time, people, cost), and management challenges and costs. Type of research: monitoring.
Application to breakfast club: How much cereal is needed, do children and parents want it, how many staff and how much time are needed to wash up, how much does it all cost?

Stage: Test
Purpose of the stage, and useful information to gather: Now that the programme is stable and manageable, investigate whether the inputs cause the intended outcomes. Type of research: evaluation, ideally rigorous (e.g., with an equivalent control group) and conducted and funded independently. Most programmes need several evaluations, in diverse circumstances.
Application to breakfast club: (How) does a breakfast club improve learning outcomes?

Stage: Scale-up / Delivering services
Purpose of the stage, and useful information to gather: Now the programme is known to be effective and can be scaled up. We don’t need to evaluate it again, so can just monitor it to ensure that it’s working as expected. Type of research: monitoring.
Application to breakfast club: Are the changes in learning outcomes in line with results from the trials? If not, something may be awry in implementation. Monitor beneficiary views, uptake, measurable results (e.g. test scores), and cost.

Monitoring and evaluation of research and development work, and of advocacy, both work rather differently.

This table does not look at process evaluation, which is separate (and highly useful). That aims to understand whether the intervention was actually delivered as planned; variations in cost, quality, staffing etc.; and to identify operational improvements.

How to improve charities’ research–>

___________

[1] This remains a terrible problem in medical research. For example, a study of 2000 studies of schizophrenia found 640 different measurement instruments, of which 369 were used only once.

[i] Making an Impact: Impact Measurement Across Charities and Social Enterprises in the UK, NPC, October 2012

[ii] Findley, M. Aversion to Learning in Development? A Global Field Experiment on Microfinance Institutions. [Online] http://www.michael-findley.com/uploads/2/0/4/5/20455799/mfi_learning.22mar13.pdf [Accessed on: 24.09.14]

[iii] http://www.effectivephilanthropy.org/portfolio-items/the-state-of-foundation-performance-assessment/ page 8

[iv] http://www.effectivephilanthropy.org/portfolio-items/the-state-of-foundation-performance-assessment/ page 8

[v] http://inspiringimpact.org/2012/10/24/measuring-the-market/


How do you make people give more? Research in the US has some surprising messages

In the US, individual charitable giving is much vaunted, but it’s flat. Once you adjust for inflation, it’s been between 2 per cent and 2.2 per cent of income for more than 30 years. Identifying how to increase giving is the focus of research by the University of Chicago’s economics department; and since it has more Nobel Prizes than any other, it’s worth listening to. I went to hear its findings, and these are some snippets. 

Being near the finish line helps. The Center for Environmental Policy Analysis at the University of Central Florida asked 3,000 Florida households for funds for computers. The letters said, variously, that Cepa already had 10 per cent, 33 per cent or 67 per cent of the amount needed. The effect was huge – the “67 per cent” letters raised six times more funds than the “10 per cent” ones, and the former received more than twice as many responses (8.2 per cent) as the latter (3.7 per cent).

Words are important. In one experiment, people could buy doughnuts from what was obviously a charity fundraising stall and could choose the amounts they paid. When the transaction was framed as a payment, the average exchange was $1.60; but when framed as a donation, it went up by nearly a third to $2.10. Some gave the latter without taking the doughnut at all.

In an experiment carried out by the Public Broadcasting Service, a non-profit US TV broadcasting network, some people were offered the gift of a PBS-branded pen. Perhaps surprisingly, people gave more when nothing was offered. The same happened with a solicitation for Save the Children: some people received a normal ask, whereas others were told they would be entered in a prize lottery. The average gift from the lottery group was $26; from the non-lottery group it was $32.

The importance of words comes up again in an experiment in which people were offered an item and asked to pay either what they wanted for it, or what they could. The “pay what you want” message produced an average exchange of 64 cents, whereas “pay what you can” gathered more than a quarter more, with an average of 82 cents.

This doesn’t necessarily make for loyal donors. In an experiment at Yale, students earned money for doing a dull task that they could either keep or donate to one of several charities. Of those who donated, only 55 per cent could recall afterwards the name of the charity they chose.

A few things are striking about the broad set of experiments from which these examples are drawn. First, none of them looks at whether giving increases overall. Second, they’re all randomised controlled trials. That method still seems to be controversial in relation to programmes, yet it’s well entrenched on the fundraising side. Third, they focus only on individual donors. Major donors, foundations and corporations might behave differently, and studies are brewing to investigate that. Fourth, the examples are all from the US, so results may or may not apply here.

Lastly, of course, impact doesn’t necessarily follow inputs. It might be quicker and easier to get a donor to give to an organisation that achieves twice as much per dollar than to get them to give twice as much. We’re on the case there, too.

This article was first published in Third Sector.

How can donors know if a programme is working?—>


Charities should do fewer evaluations; those few can be better

It’s hard to make evidence-based decisions if much of the evidence is missing, garbage, unclear, or you can’t find it. Talk given in Barcelona (18 mins)

More examples of important evidence being missing or garbage–>

What Giving Evidence is doing to make charities’ evidence clearer and easier to find--> 


Give Your Best, this Giving Tuesday

Caroline Fiennes explains how to maximize the effect of your donation, even if you have no money at all. [This article was first published by GivingTuesday.]

The basics which you must know about charities before you start

Some charities are miles better than others 

This sounds rather heretical because we often think that all charities are good. But we also think that teaching is good, and so is providing medical care, and yet we know that some teachers are better than others, some doctors, some treatments. It’s the same with charities, so your choice matters.

For example, in Kenya, where diarrhoea from dirty water is a major problem, delivering chlorine to households can prevent diarrhoea for a certain cost, but giving people chlorine at the village water source achieves the same result for less than half that cost.

Similarly in North India, free village clinics are pretty good for getting children immunised. But if clinics offer mothers free lentils for every child immunised, immunisation rates increase more than six times.

And in Southern India – where I was once a teacher – children skip school a lot.  Giving the parents cash if their children show up (a respectable and widely used idea, called a ‘conditional cash transfer’) solves some of this, but giving out free school uniforms achieves ten times as much. And that’s peanuts compared to dealing with intestinal worms which many children there have: for the same price, ‘deworming’ can achieve 25 times as much.
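The underlying sum is always the same: outcome achieved divided by cost, then compare. Here is a tiny sketch of that comparison in Python – the figures are invented placeholders, not the actual estimates from the studies above, which vary a lot by context:

```python
# Illustrative cost-effectiveness comparison (figures invented for the sketch,
# not the actual estimates from the studies mentioned above).
interventions = {
    # name: (cost per child per year in $, extra school attendance gained, in years)
    "conditional cash transfer": (70.0, 0.10),
    "free school uniforms":      (12.0, 0.17),
    "deworming":                 (1.5,  0.14),
}

# Rank by attendance gained per dollar: the cheapest programme is not
# automatically the best buy, and the nicest-sounding one often isn't either.
ranked = sorted(interventions.items(),
                key=lambda kv: kv[1][1] / kv[1][0],
                reverse=True)

for name, (cost, extra_years) in ranked:
    print(f"{name:27s} {extra_years / cost:.4f} extra school-years per $")
```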

Do the maths. Or cheat.  

The catch is that choosing wisely is hard, because charities rarely have these comparative data: a leaflet picturing a school child doesn’t indicate much about an organisation’s performance. So ignore the fundraising literature and look under the bonnet at what the charity actually does and whether it’s good at it. Making a difference depends on having a good ‘idea’ (strategy) and implementing it well. If you’re into formulae, think of it as: impact = idea x implementation.

In fact, since choosing wisely can be fiddly and laborious, find somebody smart and copy their homework! The charity world includes two types of people who’ve already done their homework in detail. First are independent analysts. GiveWell analyses charities in great detail and only recommends about 1% of the charities it assesses. They’re all in international development, mainly public health. Charity Navigator* is much broader, publishing analysis on several thousand US-registered charities. Its ratings look at the charities’ performance on financial criteria, transparency and accountability, and it’s adding information about their results. GlobalGiving is ‘an eBay’ for international development, and lists many grass-roots organisations which it has vetted.

Second, many charitable trusts and foundations employ people to analyse charities to decide which ones the trusts should fund. Some (but not all) of them are robust, and you’ll be pretty safe supporting charities which they back. You only have to find one whose interests match yours. If you’re interested in creating jobs in the US, look at the F.B. Heron Foundation; if it’s poverty in New York City, look at the Robin Hood Foundation; for international development, look at the Hewlett Foundation. A good sign is when a foundation publishes a sensible-looking strategy and criteria.

Don’t look at administration costs

People often think that low admin costs are a good sign. It turns out that they’re not. The costs which get shown in a charity’s accounts (and I wrote a whole book once about charities’ accounts, so I know!) include all kinds of useful things like systems to monitor results, evaluate what’s working and make improvements. It’s more accurate to think of them as management costs: and so it’s rather unsurprising that analysis shows that charities with higher ‘admin’ costs tend to perform better.

But I don’t have any money 

Well then, rustle up money for charity from thin air! Try variants of these ideas:

Friends. Starfish, a charity which helps HIV/AIDS orphans in South Africa, is supported by young professionals in the West. They hosted dinner parties in their homes and got guests to donate to Starfish the money which they would have spent if the party had been in a restaurant: money which nobody had earmarked for charity.

Neighbours. Fred Mulder lives in London, UK, and was in a dispute with his neighbours over access to some land that he owns. Rather than all hire expensive lawyers to resolve it, Fred offered to give his neighbours perpetual access if they each (Fred included) donated £25,000 towards an educational charity in Zambia. This generated over £100,000 for charity and improved the neighbours’ relationship, which a legal fight never would have done.

Clients. Fred Mulder is full of these ideas. He’s an art dealer, and sometimes when negotiations with clients become stuck, he suggests that the difference between his price and the offering price be donated to charity.

Bulk purchasing. A financial services company in a medium-sized British town includes various charities in its IT purchasing processes, so they benefit from the company’s volume discounts.

Hotel toiletries: Some business people who travel a lot give the complimentary toiletries from hotels to a domestic violence refuge. For people on the run from a violent partner, it’s nice if somebody’s provided some decent shampoo.

Things to give which aren’t money

Blood. Find your nearest blood donation session at http://www.blood.co.uk/SessionSearcher/search.aspx

Bone marrow. Some tissue types are more common in certain ethnic groups, meaning that a patient normally needs a donor from a similar ethnic background to her own. There’s a particular need for stem cell donors from African, African-Caribbean, Asian, Chinese, Jewish, Eastern European and Mediterranean communities. You can register as a bone marrow or stem cell donor when you give blood or at http://www.nhsbt.nhs.uk/bonemarrow/

Business clothes. Disadvantaged women trying to get back into work need business clothes – as well as training and confidence – for interviews and when they start work. Dress for Success works in nine countries, and has now helped over 550,000 women. http://www.dressforsuccess.org

Cars. Several organisations will collect an unwanted car and turn it into money for charity through http://www.giveacar.co.uk.

Computer equipment. Which? has a useful guide to recycling computers: http://www.which.co.uk/environment-and-saving-energy/environment-and-greener-living/guides/recycling-computers/pc-recycling-tips/

Coupons and free stuff. You can donate the buy-one-get-one-free items you don’t want (and, as mentioned above, hotel toiletries can go to a domestic violence refuge).

Cycles. A number of non-profit organisations refit unwanted bicycles to send to countries such as Haiti and South Africa, in the process training people in the UK to repair bikes. http://www.re-cycle.org and http://www.recyke-y-bike.org

Furniture. The Salvation Army will take furniture to sell in its shops or pass on to homeless people settling in a new home. http://www.salvationarmy.org.uk

Gardens. Landshare brings together people who want to grow their own food but have no place to do it and those who have land to share but lack time, experience or muscle-power. www.landshare.net

Glasses. Visionaid Overseas organises a nationwide recycling scheme for old or unwanted spectacles. http://www.vao.org.uk

Musical instruments. These can go to school music programmes, senior citizens, talented young students and community groups, and charities can use them at events and as prizes to help raise money.

Paint. Community RePaint schemes collect unwanted, surplus paint and re-distribute it to individuals, families and communities in need, improving the wellbeing of people and the appearance of places across the UK. www.communityrepaint.org.uk

You can even give your hair! If you have more than ten inches of hair cut off, take it home and donate it to make wigs for people who’ve lost hair due to medical treatments. www.charityintersection.com/donatehair.html or www.littleprincesses.org.uk/donate/hair.aspx

The rest?  

  • Charity shops take clothes, books, records, CDs, DVDs and jewellery, and some take furniture and electrical goods. Remember to fill in a Gift Aid form.
  • Primary schools and nurseries can use all sorts of things for craft projects: fabric, knitting wool, rolls of wallpaper, old Christmas cards, jars and bottles. Just ask first.
  • Find a new home for almost anything on Freecycle and save it from landfill. www.freecycle.org
  • Lend it to people in your neighbourhood through http://www.streetbank.com
  • Sell it and donate the proceeds. Through the online marketplace eBay you can donate the proceeds from selling virtually anything to a charity of your choice. Secondhand books can also be sold through Abebooks http://www.abebooks.co.uk and Amazon.

But do check with the charity first. People donate real junk, so much so that aid agencies run an annual competition for Stuff We Don’t Want (#SWEDOW). Past winners have included second-hand knickers (!), and the 2.4 million Pop-Tarts® airdropped onto Afghanistan by the US government in 2002. Far from being amusing tales, these items create costs for charities because they need storing and sorting, and simply become a hindrance. It’s not difficult to check that a charity needs an item before sending it.
