Empowering the poorest of the poor

This article first published in the Financial Times.

The global ‘gig economy’ is awash with downtrodden workers – and with effective campaigners

The council wanted them out. The Grand Parade area in front of Cape Town’s City Hall needed to be clear for filming one day last month, so the market traders who have their stalls there would need to disappear. There have been traders on that stretch of ground for hundreds of years and, for them, a day without trading is a day without income.

But they had taken a lesson from the previous month. In February, they were to be removed for the then-president Jacob Zuma’s “state of the nation” address, as the Grand Parade area might have been needed for a helicopter landing.

In the end the whole plan changed because of Mr Zuma’s resignation. By March, though, the Grand Parade United Traders Association and its member stallholders knew their legal rights and, by showing that the council had not followed “due process”, had the clearance halted.

Hoorah: A funder rigorously assessed its own performance

This article first published in the Financial Times.

Break out the champagne. Somebody’s finally done it. I’ve been saying for a while that funders should investigate empirically whether their “help” for non-profit organisations actually does help. It is not guaranteed: some funders create so much work for non-profits that their “support” is in fact a net drain.

GlobalGiving has been called the “eBay of international development”: a website which lists vetted non-profits, improving their visibility to prospective donors, and also offers them training of various types. A quasi-funder, it has recently investigated whether and how its support helps non-profits, and published the results. It is a prospective study and, to my knowledge, the first ever. Let’s hope that many other donors follow suit.

To see GlobalGiving’s effect, the study needed to compare the performance of recipient organisations with that of non-profits not listed on the site. But it couldn’t use just any old non-profits as comparators, because those which apply to GlobalGiving and qualify might be systematically different from those which don’t: perhaps only the most organised or determined (or foolhardy) non-profits approach it at all.

So GlobalGiving compared the performance improvements of a group of organisations which were accepted onto its platform with those of a group which was also accepted but did not complete the onboarding process.

GlobalGiving measured performance using an existing tool, the Organisational Performance Index. This assesses eight factors, including how well the organisation uses data, heeds feedback, implements its work, meets industry standards, and sets and measures goals.

The analysis found that GlobalGiving’s grantees outperformed the others on one factor — improving their participatory planning and decision-making, and using feedback. There was no detectable change on any other dimension. GlobalGiving would not have predicted this result.

Our first reaction should be to respect GlobalGiving’s humility in asking these questions at all. I have spoken to dozens of foundations in various countries, trying to persuade them to investigate their effectiveness. Many could, but their objections are impressively numerous and ingenious. It feels very much as though they don’t want to risk discovering that they don’t help.

To evaluate itself, ideally a funder would take the set of non-profits which qualify for its support (such as the applicants that pass its screening), divide them randomly into a group which receives the support and a group which goes without, and compare the performance of the two groups. This is a randomised controlled trial.
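That design is simple enough to sketch in a few lines of code. This is only an illustration: the organisation names, the fixed seed and the 50/50 split are my assumptions, not anything GlobalGiving or any real funder uses.

```python
import random

def randomise(eligible, seed=0):
    """Randomly split eligible applicants into a funded (treatment)
    group and an unfunded (control) group of equal size."""
    pool = list(eligible)
    random.Random(seed).shuffle(pool)  # random order, reproducible via seed
    half = len(pool) // 2
    return pool[:half], pool[half:]    # treatment, control

# Hypothetical applicants, purely for illustration.
applicants = [f"org-{i}" for i in range(20)]
funded, unfunded = randomise(applicants)

# A year later, compare mean performance change across the two groups.
```

Because the split is random, any systematic difference in the two groups’ later performance can be attributed to the support itself, which is exactly the comparison such a trial makes.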

Funders often claim that such studies would be unethical. That’s garbage: most funders have more eligible applicants than they can fund, so they must ration their support somehow, and doing so randomly is at least fair. The ethical argument assumes that withholding a funder’s support is detrimental; but since no funder has ever established that its support is beneficial, this is specious. Besides, has anybody asked non-profits whether they’d mind being in such a trial? I rather doubt it.

Second, we should applaud GlobalGiving’s boldness in publishing this. I know of a major funder which has done somewhat similar analysis (though retrospective) but has chickened out of releasing it. GlobalGiving raises its own money, so airing its dirty laundry in public is riskier than for endowed funders that do not compete for resources.

Third, the detail of GlobalGiving’s research report illustrates various important points about how to write up research. This is important because many charities produce research about their impact that is so vague it’s completely useless.

For instance, in my work advising a corporate donor recently, I looked at a UK charity whose “impact statistics” included statements such as “89 per cent reported feeling more self-confident as a result of attending [the site]”. Well, 89 per cent of whom? If you ask everyone who comes through the door you can expect one answer, and another if you ask only the regular visitors. And self-confidence as measured how, and over what period?

As a donor, if you want to understand what a charity is really achieving, you must understand what good research looks like. Truth is easily concealed.

Medical research, which is more sophisticated than most, has checklists of the details to be included in research reports, precisely so that clinicians can see whether the research is robust and applies to their particular patient.

For example, for randomised controlled trials, the checklist is called Consort (Consolidated Standards of Reporting Trials) and it requests details such as how the participants were chosen and how they were randomised. This is because you can rig both of those. Where research reports omit those details, the claimed results are — surprise! — more impressive than where the details are included.

Other research methods more commonly used by charities, such as case studies and observational studies, also have checklists. Many charity studies would score low on them. The GlobalGiving study would score rather well and shows that this isn’t difficult. Its report explains why it chose those participants, how they were recruited into the study and why some dropped out, and hence why it ended up with the sample size that it did. This is akin to saying who the 89 per cent were. It explains its choice of measurement instrument, precisely how data were collected, and how they were analysed. Any competent study can report this — and should.

GlobalGiving is unusual in being able to do this research itself. Few funders or charities have those skills, but they could hire external research experts or invite academics to study them.

One swallow doesn’t make a summer. But one swallow does prove that it is possible to be a swallow: GlobalGiving’s study shows that funders can rigorously analyse their own impact. More should.

Introduction to monitoring and evaluation

Working with Keystone Accountability, we recently advised a funder which was relatively new to monitoring, evaluation and learning. We created for them a ‘primer’ to introduce some of the key concepts, and are publishing it because we hope that the material is useful to a wider audience of funders and implementers.

It is designed to explain what monitoring is, what evaluation is, and how they differ: as the document explains, the two are completely different things, even though they are often conflated. We structured our thinking into a four-level framework, which simply splits out the various questions about monitoring and evaluation:

  • Level 1: dimensions of the grant: inputs (such as grant size) and grantee activities
  • Level 2: tracking changes around the grantee, e.g., increase in number of jobs, change in grantee partner revenue, number of workshops run
  • Level 3: evaluating grantees: i.e., establishing which of those changes result from (i.e., are attributable to) the grantee partner
  • Level 4: evaluating a funder: i.e., establishing which of those changes result from (i.e., are attributable to) the funder

We present these four levels as a ladder, because the issues at Level 1 are simpler than those at Level 2, and so on, both in terms of the types of data / analysis needed and the conceptual complexity.

Download the primer here.

Perils and pitfalls of evidence-based philanthropy

Keynote talk and very spirited panel at the Philanthropic Foundations of Canada, Toronto, Oct 2018.

Notice the all-female panel 🙂

(We will eventually cut the slides into this so you can see what Caroline was talking about. For now, you can download them here and follow what’s going on.)


Getting research into practice: Keynote at Global Evidence and Implementation Summit, 2018

Caroline Fiennes gave a keynote presentation at the Global Evidence and Implementation Summit in Melbourne, October 2018. To watch, click on the photo and wait a second. You may need to log in – any email address is fine. Excuse the didgeridoo interruption!

Other talks from the conference are here.



With all due diligence: Claims made by some ‘impact investments’ do not stack up

Demanding a financial return reduces the social benefit rather more often than impact investors let on

A version of this article first published in the Financial Times.

It is a beguiling offer — an investment that produces a financial return and also a social or environmental benefit: private benefit plus public benefit. That probably does happen sometimes, but certainly not for every impact-investment product, so investors must be on their guard.

The impact of something is the difference between the world in which that thing happens versus the world in which it does not. So assessing impact means considering what would have happened without it.
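As a toy illustration with made-up numbers (nothing here comes from any real evaluation), the arithmetic of impact is just a subtraction:

```python
# Hypothetical figures for a literacy programme, invented for illustration.
score_with_programme = 72     # outcome in the world where it happens
score_without_programme = 65  # the counterfactual: outcome if it does not
impact = score_with_programme - score_without_programme
print(impact)  # the 7-point difference is the programme's impact
```

The hard part is not the subtraction but estimating the counterfactual number, since we never observe both worlds at once.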

This is how to assess the impact of anything. For example, what would have become of you if you had not been educated? What would have happened to Europe without the reparations demanded from Germany after World War I?

Establishing what would have happened otherwise – the counterfactual – is sometimes impossible, as in the reparations example. In those cases we have to make reasonable conjectures based on everything else we know. Sometimes it is possible, though it may be complicated: this is a whole branch of social science.

Foundation boards are a throwback to a ‘male, pale and stale’ world

Lack of diversity is a problem for foundations and grant-making committees

This article first published in the Financial Times.

Every donor who sets up a charitable foundation needs a board. And every company starting a charitable programme needs to determine who will make the decisions about what it does and whom it supports.

There seems to be a major problem with these boards. In the UK, 99 per cent of foundation board members are white, according to data published this summer by the Association of Charitable Foundations. Only three per cent of foundation trustees are under 45 years old. Sixty per cent are retired. Two-thirds are male. They are “drawn from a narrow cross-section of society: white British, older and above average income,” the report says.

When a friend of mine began running a foundation outside Europe, she discovered that several people listed as trustees were, in fact, dead. The dead are weirdly important in philanthropy – for example, they are major donors – but they’re not meant to be making decisions.

Charity begins with admitting we got it wrong

We need to be scientific and fearless about assessing whether proposed solutions work

This article first published in the Financial Times.

The Bill & Melinda Gates Foundation seems to have wasted six years and $1bn on a programme to improve teaching effectiveness in US schools. An evaluation released last month showed that the programme had a negligible effect on its goals — which included student achievement, access to effective teaching and dropout rates — and that some of them worsened.

Much the same happened to Ark, the UK-based charity founded by the hedge fund industry. It created and co-funded a programme in 25,000 schools in the Indian state of Madhya Pradesh, supporting the government to improve school performance. It was based on sound research about how to reduce teacher absence, improve teaching and create more accountability through school inspections. Ark has worked on it since 2012.

Charities (gasp!) using and producing sound evidence

This article first published in the Financial Times.

Anne Heller has done something that I had never previously seen in my 18 years in the non-profit sector. She identified a social problem, scoured academic literature to find a solution, and then set up a non-profit to implement it. That approach sounds jaw-droppingly obvious, but it is in fact very rare for a charity to design itself around existing evidence.

Ms Heller had worked for the city of New York when Michael Bloomberg was mayor, running shelters for homeless families. She noticed that about 10 per cent of the families who used the shelters returned repeatedly. In other words, the services which the shelters provided were not solving these people’s problems.

Mapping the existing evidence about preventing child abuse in institutions

We are producing a map of the existing evidence about child abuse within organisations

New project! Giving Evidence is working to produce a map of the existing evidence (and the gaps in it) about what is effective at reducing child maltreatment – particularly child sexual abuse – within organisations such as youth clubs, sports clubs and churches. We’ll be working on it with the Centre for Evidence and Implementation and Professor Aron Shlonsky of Monash University in Melbourne, who have undertaken considerable research on this topic.
