This article was first published in the Financial Times.
Break out the champagne. Somebody’s finally done it. I’ve been saying for a while that funders should investigate empirically whether their “help” for non-profit organisations actually does help. It is not guaranteed: some funders create so much work for non-profits that their “support” is in fact a net drain.
GlobalGiving has been called the “eBay of international development”: a website which lists vetted non-profits, improving their visibility to prospective donors, and also offers them training of various types. A quasi-funder, it has recently investigated whether and how its support helps non-profits, and published the results. It is a prospective study and, to my knowledge, the first ever. Let’s hope that many other donors follow suit.
To see GlobalGiving’s effect, the study needed to compare the performance of recipient organisations with that of non-profits that are not listed on the site. But it couldn’t use just any old non-profits as comparators, because the non-profits which apply to GlobalGiving and qualify might be systematically different from those that don’t. Perhaps it is only approached by non-profits which are the most organised or determined (or foolhardy).
So the study compared the performance improvements of a group of organisations which were accepted on to GlobalGiving’s platform with those of a group which was also accepted but, for whatever reason, did not complete the onboarding process.
GlobalGiving measured performance using an existing tool, the Organisational Performance Index. This assesses eight factors, including how well the organisation uses data, heeds feedback, implements its work, meets industry standards, and sets and measures goals.
The analysis found that GlobalGiving’s grantees outperformed the others on one factor — improving their participatory planning and decision-making, and using feedback. There was no detectable change on any other dimension. GlobalGiving would not have predicted this result.
Our first reaction should be to respect GlobalGiving’s humility in asking these questions at all. I have spoken to dozens of foundations in various countries, trying to persuade them to investigate their effectiveness. Many could, but don’t: their objections are impressively ingenious. It feels remarkably as though they don’t want to risk discovering that they don’t help.
To evaluate itself, ideally a funder would take the set of non-profits which qualify for its support (such as the applicants that pass its screening), divide them randomly into a group which receives the support and a group which goes without, and compare the performance of the two groups. This is a randomised controlled trial.
Funders often claim that such studies would be unethical. That’s garbage: most funders have more eligible applicants than they can fund, so they have to ration their support somehow, and doing it randomly is at least fair. The ethical argument assumes that withholding a funder’s support is detrimental, but since no funder has ever established that its support is beneficial, this is specious. Plus, has anybody asked non-profits whether they’d mind being in a trial like this? I rather doubt it.
Second, we should applaud GlobalGiving’s boldness in publishing this. I know of a major funder which has done somewhat similar analysis (though retrospective) but has chickened out of releasing it. GlobalGiving raises its own money, so airing its dirty laundry in public is riskier than for endowed funders that do not compete for resources.
Third, the detail of GlobalGiving’s research report illustrates various important points about how to write up research. This is important because many charities produce research about their impact that is so vague it’s completely useless.
For instance, in my work advising a corporate donor recently, I looked at a UK charity whose “impact statistics” included statements such as “89 per cent reported feeling more self-confident as a result of attending [the site]”. Well, 89 per cent of whom? If you ask everyone who comes through the door you can expect one answer, and another if you ask only the regular visitors. And self-confidence as measured how, and over what period?
As a donor, if you want to understand what a charity is really achieving, you must understand what good research looks like. Truth is easily concealed.
Medical research, which is more sophisticated than most, has checklists of the details to be included in research reports, precisely so that clinicians can see whether the research is robust and applies to their particular patient.
For example, for randomised controlled trials, the checklist is called Consort (Consolidated Standards of Reporting Trials) and it requests details such as how the participants were chosen and how they were randomised. This is because you can rig both of those. Where research reports omit those details, the claimed results are — surprise! — more impressive than where they are included.
Other research methods more commonly used by charities, such as case studies and observational studies, also have checklists. Many charity studies would score low on them. The GlobalGiving study would score rather well and shows that this isn’t difficult. Its report explains why it chose those participants, how they were recruited into the study and why some dropped out, and hence why it ended up with the sample size that it did. This is akin to saying who the 89 per cent were. It explains its choice of measurement instrument, precisely how data were collected, and how they were analysed. Any competent study can report this — and should.
GlobalGiving is unusual in being able to do this research itself. Few funders or charities have those skills, but they could hire external research experts or invite academics to study them.
One swallow doesn’t make a summer. But one swallow does prove that it is possible to be a swallow: GlobalGiving’s study shows that funders can rigorously analyse their own impact. More should.