Some grant decisions should be made at random(!)

Don’t laugh – the notion that grants should be given at random rattled around when the National Lottery was set up over 20 years ago. The joke was that since prize-winners are chosen at random, maybe grant-winners should be too.

Perhaps we should resurrect the idea. The medics have studied it. Australian health economists looked at every grant application to the National Health and Medical Research Council of Australia in 2009 – all 2,085 of them – and analysed the scores given by the expert panels that assessed applications.

Now, if there’s one thing we know about experts, it is that they’re not very good. For example, a study published by the US National Academy of Sciences showed that judges’ decisions about imprisonment varied dramatically and predictably, depending on whether the decision was made before or after lunch. The Nobel laureate Daniel Kahneman reported how, given the same picture on different days, radiologists contradict themselves 20 per cent of the time, as do stock market analysts, pathologists and many others. Extraneous factors often hold sway.

The Australian health economists put a margin of error around the experts’ scores to account for extraneous factors such as these. They found that 61 per cent of proposals were ‘never funded’ – those applications would get binned even with extraneous variations in their favour; only 9 per cent were ‘always funded’ – they scored highly enough for extraneous variations not to sabotage them; and nearly a third (29 per cent) were ‘sometimes funded’ – their fate depended on how supposedly irrelevant factors happened to play out that day.

The health economists note that “strong and weak grant proposals should be identified consistently, but most are likely to occupy a tightly packed middle ground”. This study – the only one of its type I’ve seen, and I’ve looked hard – showed there was still what the authors called “a high degree of randomness”.
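To make that concrete, here is a minimal sketch in Python – with invented scores, an invented funding cutoff and an assumed noise level, not the study’s actual model – showing how re-scoring every application under random extraneous variation sorts proposals into ‘never’, ‘always’ and ‘sometimes’ funded:

```python
# A minimal sketch, not the authors' actual model: invented scores, an
# invented funding cutoff and an assumed margin of error, to show how
# extraneous variation sorts proposals into the three categories.
import random
from collections import Counter

random.seed(0)

N_APPLICATIONS = 2085   # as in the Australian study
FUNDED_SHARE = 0.21     # hypothetical: the top 21% of scores get funded
NOISE_SD = 0.5          # assumed margin of error on a 0-10 panel score
N_DRAWS = 1000          # re-score each application this many times

scores = [random.gauss(5.0, 1.0) for _ in range(N_APPLICATIONS)]
cutoff = sorted(scores, reverse=True)[int(N_APPLICATIONS * FUNDED_SHARE)]

def fate(score):
    """Classify one application by simulating extraneous variation."""
    wins = sum(score + random.gauss(0, NOISE_SD) >= cutoff
               for _ in range(N_DRAWS))
    if wins == N_DRAWS:
        return "always funded"
    if wins == 0:
        return "never funded"
    return "sometimes funded"

tally = Counter(fate(s) for s in scores)
for label, n in tally.most_common():
    print(f"{label}: {100 * n / N_APPLICATIONS:.0f}%")
```

With noise that is even modest relative to the spread of scores, a large ‘sometimes funded’ middle appears around the cutoff – which is precisely the study’s point.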

So let’s formally introduce randomness – just for the applications that are neither stars nor duds. It might save considerable time and, therefore, money. It is also honest: a 1998 US paper called for some random grants because, it said, “instead of dodging the fact that chance plays a big part in awarding money, the (random allocation) system will sanctify chance as the determining factor”. 
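For illustration only, here is what that mechanism might look like – the score thresholds and budget are invented assumptions, not any funder’s actual rules:

```python
# A sketch of the proposed mechanism: fund the stars outright, bin the
# duds, and draw lots among the tightly packed middle. Thresholds are
# illustrative assumptions, not any funder's actual policy.
import random

def allocate(applications, budget, star_cutoff=8.0, dud_cutoff=4.0):
    """applications: list of (name, panel_score) pairs; returns winners."""
    stars = [a for a in applications if a[1] >= star_cutoff]
    middle = [a for a in applications if dud_cutoff <= a[1] < star_cutoff]
    random.shuffle(middle)                     # the lottery
    slots_left = max(0, budget - len(stars))
    return stars + middle[:slots_left]

apps = [("A", 9.1), ("B", 7.2), ("C", 6.8), ("D", 6.7), ("E", 3.0)]
print(allocate(apps, budget=3))  # A always wins; two of B/C/D win by lot
```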

Grant-makers and their trustees and panels might hate this idea because it reduces their decision rights. But it might also reduce “grant rage” from rejected applicants, and hence the aggravation in grant-makers’ jobs. It might have secondary benefits too, such as discouraging grant-seekers from hanging around at drinks receptions hoping to suck up to grant officers or trustees who might influence their case, because they’d know there was a good chance of the decision being random anyway.

We all know that grant funding is scarce. The experience of running randomised evaluations in less developed countries – in which, typically, some people get something and others don’t – is that people who are accustomed to scarcity value the transparency of allocation made at random. It is better than the usual allocation by patronage.

This article was first published in Third Sector.

The day after I filed it, somebody at my tennis club happened to recount the uproar when, one year, the club’s batch of Wimbledon tickets was given to the chairperson, the ladies’ doubles captain, the men’s doubles captain, and so on. It subsided only when those tickets were recalled and re-allocated via a ballot in the bar. It seems that we all appreciate the transparency of random allocation when valuable resources are scarce.


Deworming: problems under re-analysis

A flawed study on deworming children—and new studies that expose its errors—reveal why activists and philanthropists alike need safeguards.

The book Zen and the Art of Motorcycle Maintenance, of all things, offers a critically important message for people who work in development and philanthropy. “The real purpose of the scientific method is to make sure nature hasn’t misled you into thinking you know something you actually don’t know.”

Three new papers published today confirm this by illustrating just how easily we can be misled by what we think we know, and just how much the scientific method can safeguard us from continuing to be misled (and potentially investing significant time and effort in the wrong priorities). The three papers raise important questions about the practice of treating children for intestinal worms, which has in recent years become a darling of international development.

Deworming Programs Have Been “In”

Here’s the back-story. Worms infect people through contact with infected feces. They live in people’s bodies (they can be a metre long!), eat their food, deprive them of nutrients, and make them lethargic and ill. In 1999, two US economists conducting a study in Western Kenya found that “deworming” school children improved their nutritional intake, reduced their incidence of anemia, and—by making them less ill and lethargic—increased their attendance at school and hence improved their exam results. The economists also claimed that attendance increased—by 7.5 percent—even at schools where children did not receive treatment, because those children, living in the same area as treated children, encountered fewer worm eggs in the soil near their homes and so were less likely to be infected. (There are two main types of worms: soil-transmitted worms, and water-transmitted worms known as schistosomiasis or bilharzia. The Kenyan study was mainly of soil-transmitted worms but did pick up some schistosomiasis.)

Consequently, the Copenhagen Consensus made deworming one of its top recommendations. GiveWell named two organizations that focus on deworming in the top four on its list. And development economist Michael Kremer, a co-author of the 1999 Kenyan study, started an initiative called Deworm the World, which has treated 37 million children in four countries to date.

The Scientific Method

Now, the scientific method involves several safeguards against being misled. One is isolating variables to reveal which one(s) matter. Maybe the speed with which a dropped object hits the ground depends on the height from which it’s dropped and the gender of the person who drops it. So we experiment by having people of both genders drop identical objects from the same height, thus “isolating” gender as a variable and, when the objects hit the ground at the same time, showing that it doesn’t matter.

Another safeguard addresses bias by replicating an experiment elsewhere, and comparing and combining the answers. If we open our back-to-work program only to motivated people, then we don’t know whether their success getting jobs is due to the program (a “treatment effect”) or the unusual characteristics of the people we chose (a “selection effect”). The latter would create a selection bias. If we interview only the people in the program who stick it out to the end, we don’t hear from the people who quit because it was so arduous, so our user-experience data may suffer from survivor bias. These and other biases mislead us into thinking we know things we actually don’t know. Single studies may also be biased because they may unwittingly involve particularly unusual people or take place under unusual circumstances. They may also simply get freak results by chance.
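To see how badly a selection effect can mislead, here is a toy simulation of the back-to-work example – all numbers are invented, and the programme in the simulation truly does nothing:

```python
# A toy simulation (all numbers invented) of the back-to-work example:
# the programme itself has zero effect, but recruiting only motivated
# people manufactures an apparent effect.
import random

random.seed(1)

def job_found(motivation, treated):
    """True effect of treatment is zero; only motivation matters."""
    return random.random() < 0.2 + 0.4 * motivation

people = [random.random() for _ in range(10_000)]   # motivation in [0, 1]

# Biased design: the programme enrols only the motivated.
enrolled = [m for m in people if m > 0.7]
others   = [m for m in people if m <= 0.7]
biased_gap = (sum(job_found(m, True) for m in enrolled) / len(enrolled)
              - sum(job_found(m, False) for m in others) / len(others))

# Randomised design: a coin flip decides who is enrolled.
coin = [random.random() < 0.5 for _ in people]
treat   = [m for m, c in zip(people, coin) if c]
control = [m for m, c in zip(people, coin) if not c]
rct_gap = (sum(job_found(m, True) for m in treat) / len(treat)
           - sum(job_found(m, False) for m in control) / len(control))

print(f"biased comparison suggests: {biased_gap:+.2f}")   # large, spurious
print(f"randomised comparison:      {rct_gap:+.2f}")      # near zero, correct
```

The self-selected comparison reports a gap of roughly 20 percentage points that is entirely a selection effect; the randomised comparison correctly reports roughly zero.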

A third safeguard in the scientific method is repeating the analysis. In other words, checking the maths.

The three papers, now available, used the scientific method to great effect. The Cochrane Collaboration is a global network of medical researchers who do “systematic reviews” and “meta-analyses” (it may well have saved your life at some point). In 2012, the Cochrane Collaboration wrote: “It is probably misleading to justify contemporary deworming programmes based on evidence of consistent benefit on nutrition, haemoglobin, school attendance or school performance.” Recent correspondence with the authors implies that they’ve not changed their minds. And today, the Cochrane Collaboration publishes its fourth systematic review of mass deworming. The group looked at all 45 studies within its scope and concluded: “There is now substantial evidence that this [mass deworming treatment] does not improve nutritional status, haemoglobin, cognition, or school performance.”

In two additional studies published today, researchers at the London School of Hygiene and Tropical Medicine (LSHTM) simply re-analyzed the Kenyan data. They found—if you’ll excuse the pun—a can of worms: errors, missing data, misinterpretation of probabilities, and a high risk of various biases. The effects are huge: the claimed effect on school attendance among untreated children seems entirely due to “calculation errors” and effectively disappeared on re-analysis, and the claimed effect on anemia likewise lost statistical significance.

We shouldn’t be surprised: that people make mistakes is hardly news. What’s impressive is that somebody took the important step of re-analyzing the data, caught the errors, and prevented us from being misled by them. As Yale’s Dean Karlan and I noted when the 2012 Cochrane worm study was published, this is exactly how science is supposed to work.

The re-analysis papers raise three more subtle issues. First, the choice of analytical method matters (even if the data are complete and accurate). When looking at changes in school attendance, the economists used a method common in economics; the epidemiologists used a different method common in epidemiology and found that “the strength of evidence supporting the improvement was dependent on the analysis approach used”. There can only be one “correct” answer, and it’s not yet clear which method is misleading.
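For a generic illustration of how this can happen – invented data, not the Kenyan dataset, and neither team’s actual method – consider pupils clustered within schools: an analysis that ignores the clustering reports much stronger evidence than one that respects it.

```python
# A generic illustration (invented data, not the Kenyan dataset) of how
# the analysis approach changes the strength of evidence: pupils are
# clustered within schools, and ignoring the clustering overstates
# certainty about a small treatment effect.
import random
from scipy import stats

random.seed(2)

def school_attendance(treated, n_pupils=50):
    school_effect = random.gauss(0, 0.08)            # schools differ a lot
    base = 0.75 + (0.02 if treated else 0) + school_effect
    return [min(1, max(0, random.gauss(base, 0.05))) for _ in range(n_pupils)]

treated_schools = [school_attendance(True)  for _ in range(25)]
control_schools = [school_attendance(False) for _ in range(25)]

# Approach 1: pool all pupils and ignore clustering (too optimistic).
pupils_t = [x for s in treated_schools for x in s]
pupils_c = [x for s in control_schools for x in s]
print("pupil-level p  =", stats.ttest_ind(pupils_t, pupils_c).pvalue)

# Approach 2: one summary per school (respects the clustering).
means_t = [sum(s) / len(s) for s in treated_schools]
means_c = [sum(s) / len(s) for s in control_schools]
print("school-level p =", stats.ttest_ind(means_t, means_c).pvalue)
```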

Second is how rare re-analyses are. Open data to enable post-publication review is sexy, funded and increasingly common. But actually doing post-publication review is hard. It’s hard to fund—so hats off to 3ie, which funded this one; it’s hard to do—the original authors sacrificed masses of time digging up old files for LSHTM to use; and it’s hard to get the results published—pre-publication peer review of LSHTM’s papers took about five months.

Third is just how different this is from most impact research in the social sector, which often goes unreported, or is reported unclearly or incompletely, and only rarely makes the raw data available for inspection. I’ve argued before that most charities shouldn’t do impact evaluations (as has Dean, separately)—eradicating misleading biases is just too hard for non-specialists. But when they do, they should publish the full details and data. The scientific method requires it.

This article was first published in Stanford Social Innovation Review.

 


What to do when you’re badly treated by a funder?

Jake Hayman was right in his recent blog Not Fit For Purpose: Why I’m Done With the Foundation World – there are major problems with charitable funding.

We can see this just from the fact that charities normally pay between 20p and 40p to raise £1, whereas companies pay between 3p and 5p to raise the same amount: to raise £1m, a charity typically spends £200,000–£400,000, where a company raising capital spends £30,000–£50,000. We can tell, too, from how remarkably unpopular many grant-makers are compared with most other people who hand out money.

But what do you do about it? This isn’t a rhetorical question: I’m asking for actual examples. What have you – yes, you! – done in the past when you’ve felt badly treated by a foundation? Did you write to the chief executive? To the chair? Rant on Twitter? Just bitch about them privately? And what happened as a result? If we collectively had more stories and examples (evidence, of a sort) about what works and what doesn’t in influencing funder behaviour, perhaps we could solve much of this.


A welcome public row about donor effectiveness

Well done, Malcolm Gladwell. On Wednesday this week, Harvard announced its biggest gift ever: $400m from the American hedge fund manager John Paulson for its school of engineering and applied sciences. Gladwell ridiculed it: ‘It came down to helping the poor or giving the world’s richest university $400 mil it doesn’t need. Wise choice John!’ Various other financial overlords sprang to Paulson’s defence. ‘My first thought was: “Wait a minute, pal, how much have you given?”’ said one. ‘Would they criticize him if he just sat on his wealth and “compounded it” like certain others?’ said another. And a third said: ‘Who the f— can criticize a guy who donated $400 million to his alma mater?! What’s to criticize? Extremely generous and he is to be applauded.’

Opportunity cost, that’s what – well, not to criticize, but to question – and effectiveness along with it. Charities vary wildly in how effective they are: with the same amount of resource, some achieve nothing, some achieve modest results, some achieve masses, and some make things worse. The choices which donors make – like the one Gladwell is calling out – are highly consequential.


Is grantee / beneficiary feedback a substitute for RCTs?

The short answer is no. At first sight, it seems that randomized controlled trials (RCTs) and Constituent Voice (CV: a good way of gathering feedback from programme beneficiaries or grantees) could substitute for each other, because both seek to ascertain a programme’s effect. In fact they’re not interchangeable at all. An RCT is an experimental design – a way of isolating the variable of interest – whereas CV is a ‘ruler’: a way of gathering information that might be used in an experiment or in other ways, as the sketch below shows.
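Here is a hypothetical sketch of how the two fit together, with invented feedback scores: CV supplies the measurement, and the RCT supplies the comparison that lets us attribute any difference to the programme.

```python
# A sketch (hypothetical data) of the relationship: Constituent Voice
# supplies the measurement, the RCT supplies the comparison. Feedback
# scores are collected in both arms; only the random assignment lets
# us attribute the difference to the programme.
import random

random.seed(3)

def feedback_score(in_programme):
    """0-10 'would you recommend us?' score; assumed small true effect."""
    return max(0, min(10, random.gauss(6.5 + (0.8 if in_programme else 0), 2)))

assignments = [random.random() < 0.5 for _ in range(2_000)]   # coin flip
treat   = [feedback_score(True)  for a in assignments if a]
control = [feedback_score(False) for a in assignments if not a]

print("mean score, programme group: ", sum(treat) / len(treat))
print("mean score, comparison group:", sum(control) / len(control))
# The CV instrument is the ruler; the randomisation is the experiment.
```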


Do gongs from HM Queen make any difference?

It’s June, which brings the Queen’s official birthday, and perhaps this year you – like many charity sector people before you – will get lucky and appear in the Birthday Honours list. If so, arise, Sir or Dame Reader, for I have an important task for you.

This auspicious occasion presents an opportunity to find out whether Her Majesty’s gongs actually make any difference. We currently don’t know, despite all the sound and fury about them.


Why I support AllTrials & suggest that you do too

This article was first published by The Life You Can Save.

Alessandro Liberati was suffering from multiple myeloma and trying to decide whether to go through the trauma – for the second time – of a bone marrow transplant.

“There were four [clinical] trials that might have answered my questions, but I was forced to make my decision without knowing the results because, although the trials had been completed, they had not been published,” he said.

Alessandro’s predicament isn’t unique. Millions of patients like Liberati, and their doctors, are avoidably in the dark: amazingly, fully half of all clinical trials are unpublished.

“As a result, the effects of most medicines are effectively unknown,” says Dr. Ben Goldacre, who has studied the problem of why clinical trials often go unpublished.


Helping mainstream donors to give better

If you want to give to, say, cancer, and want to find a good charity in that field, how can you currently find out which organisation is any good? Essentially you can’t: charity ‘due diligence’ is far too hard for almost any non-professional donor.
This matters because most pounds given are given by ‘normal people’ – those for whom philanthropy isn’t a job – and the pattern is the same in most developed countries. Those donors really don’t have much option but to give randomly or based on hearsay.
We’ve thought long about fixing this, and are now moving to action. Our ‘strategy’ is to borrow other people’s homework: to create and market a website which compiles the recommendations of sensible grant-makers (in the form of the charities they fund) and of independent analysts.
A brief paper outlining the concept is here. It’s very early days, but you’ll get the drift. We’re very interested in your views: please send them to admin [at] giving-evidence [dot] com.
We’re also looking for a freelancer with experience of market research as part of new product development (NPD) to help in these early stages. Ideally they’d have done some NPD and be familiar with human-centred design / rapid prototyping. They can be anywhere in the UK. If that’s you, please get in touch, stating your location, experience and day rate.

The key barriers to strategic philanthropy are practical

This was published by Stanford Social Innovation Review in a series about strategic philanthropy.

Encouraging more strategic philanthropy is a behavior change exercise. Paul Brest and I are fellow travellers and co-conspirators in that mission. But his article implies that he and I see different barriers to achieving that change. (We may of course both be right.) Brest lays out the objections to strategic philanthropy and refutes them—and does so excellently. By contrast, the barriers which I see and encounter are primarily practical. 

To change donor behavior, we can usefully learn from the patron saint of “nudging,” University of Chicago professor Richard Thaler, who first deployed behavioral insights in economics. He has developed two ‘mantras’ while advising ‘nudge units’ in various governments around the world:

  • “You can’t make evidence-based policy decisions without evidence.”
  • “If you want to encourage some activity, make it easy.”

Strategic philanthropy comes out badly on both mantras: we have barely any evidence about how to do it, or about the location or extent of most of the problems it might tackle; and (not unrelatedly) it is not easy to do.


Behavioural insights are rocket-fuel for charities

Few people can claim that their work has been used routinely to inform or improve fundraising, reproductive health, the governance of African countries or road safety, or to help people to get jobs or quit smoking; but the US economist Richard Thaler can. He has the rare distinction of having revolutionised a major discipline, and in his new book, Misbehaving: the Making of Behavioral Economics, he recounts how he did it.

Thaler realised that much of what economics says about how people behave conflicts with how we actually behave. Predictions which collide with observation are bad news in science. He suspected that economics would make better predictions if it absorbed insights from experimental psychology. This resulted in the new discipline of behavioural economics, which has since become mainstream.

Behavioural insights become rocket fuel when they are applied to social and development problems, and to public policy. They are useful to charities in at least three ways.
