Goldman Sachs doesn’t (appear to) understand stats. Who are the muppets now?

This article first appeared in Third Sector magazine. 

The legendary investment bank Goldman Sachs was described by Rolling Stone magazine two years ago as being “like a great vampire squid wrapped around the face of humanity”; and a former executive who resigned very publicly last month via a New York Times article revealed that some of its staff refer contemptuously to clients as “muppets”.

But what about its corporate philanthropy? Well, it has run an astonishingly effective philanthropic initiative on immunisation – and, in another project, made some spectacularly spurious claims about charitable impact.

Masters of the universe

First, the immunisation. Great corporate philanthropy involves using a company’s unique resources to create benefits that only that company could have generated. Goldman Sachs provided a small group of bankers on a pro bono basis to create the innovative International Finance Facility for Immunisation.

This takes numerous governments’ financial commitments to health and, by issuing bonds secured against them, makes the money available up front. This enables better planning, which accelerates research and development, reduces vaccine prices and speeds delivery. Vaccinations enabled by the bond are expected to have immunised half a billion children, fully three million of whose lives are thought to have been saved by the IFFIm alone.

Mahna mahna

But then, by contrast, there’s the Goldman Sachs programme called 10,000 Women, which supports female entrepreneurs. It takes out full-page adverts in magazines to share some data that it presumably thinks should impress us: “70 per cent of [the programme’s] graduates surveyed have increased their revenues, and 50 per cent have added new jobs.”

In my view, this is the worst type of ‘impact reporting’, because it tells us precisely nothing.

To understand charities’ impact we need to answer two questions: first, what happened? And second, how is that different from what would have happened anyway? The data that Goldman Sachs gives here answers neither question. It’s an error common to many charities’ impact reports.

What happened? The data doesn’t even show whether the performance of the women on the programme improved. Perhaps they were doing just the same beforehand – perhaps they were doing better before and the programme dulled their skills. At the very minimum, charities should report not just ‘after’ data (as Goldman Sachs is doing here) but ‘before’ and ‘after’ data, so we get some sense of the change.

What would have happened otherwise? Even if those women’s performance has improved, has it improved more than it would have done otherwise? Again, we’ve no idea because there’s no control or comparator. Perhaps all businesses have grown that much – or perhaps others that didn’t do the programme have grown more. A better statement would be “70 per cent of graduates increased their revenues, whereas only 20 per cent of other businesses did in the same period”. The “other businesses” here would be acting as a control group. This is very basic statistics.

In fact, the control group in the 70 per cent/20 per cent example still wouldn’t prove much. It’s not hard to imagine that the kind of women who get themselves onto a Goldman Sachs programme are just the kind of go-getters who would do well in virtually any circumstance. This selection bias means that we don’t know whether the results are due to the programme itself or to systematically unusual characteristics in the women it selects.

The only way to control for selection bias is an experiment in which a researcher takes a large enough set of female entrepreneurs who are eligible for the programme and randomly assigns each of them either to do the programme or not to do it. The researcher then compares what happens to the two groups’ performance over time. The group that doesn’t do the programme is a control group, which will – if the experiment is done right – show what the women who did the programme would have achieved otherwise. Voilà.
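The random-assignment step described above can be sketched in a few lines of Python. To be clear, everything in this sketch is invented for illustration – the pool of entrepreneurs, the group sizes and the outcome rates are placeholders, not real programme data:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# A hypothetical pool of eligible entrepreneurs (names invented for this sketch).
pool = [f"entrepreneur_{i}" for i in range(200)]

# Randomly split the pool: half do the programme, half form the control group.
random.shuffle(pool)
programme_group, control_group = pool[:100], pool[100:]
programme_set = set(programme_group)

# In a real trial, outcomes would come from follow-up surveys of both groups.
# Here we simply invent a yes/no "revenue grew" outcome for each person,
# with made-up probabilities of 0.7 (programme) and 0.5 (control).
def revenue_grew(in_programme):
    return random.random() < (0.7 if in_programme else 0.5)

outcomes = {p: revenue_grew(p in programme_set) for p in pool}

def share_grown(group):
    """Fraction of a group whose revenue grew."""
    return sum(outcomes[p] for p in group) / len(group)

print(f"Programme: {share_grown(programme_group):.0%} grew revenues")
print(f"Control:   {share_grown(control_group):.0%} grew revenues")
```

Because assignment is random, the two groups are alike on average in everything else – including go-getting tendencies – so any difference between the two printed shares can be attributed to the programme rather than to selection.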

Selection bias is also common in charities’ impact reports. Randomised controlled trials of the type described are, happily, increasingly used to figure out what is really working: the Education Endowment Foundation is using them in UK education, and both Innovations for Poverty Action and J-PAL (the Abdul Latif Jameel Poverty Action Lab) use them for alleviating extreme poverty.

Selection bias, controls and randomisation are standard tools in the statistician’s box. The masters of the universe should make better use of them.
