Article first published by the Society of Impact Assessment Analysts
In assessing a charity’s impact, we seek to identify the difference the charity has made in the world: that is, what has happened that would not otherwise have happened. Though this may sound obvious, impact data rarely actually show it.
For example, imagine a city with poor air quality. A charity works there, trying to persuade drivers to turn off their engines when they’re idling at traffic lights. The charity reports that at the beginning of the year, the air was clean 10% of the time, whereas by the end of the year, it was clean 20% of the time.
Actually, this indicates precisely nothing about whether the charity is doing a good job. Perhaps the improvement was due to the charity, but perhaps it would have happened anyway. Maybe engine technology is improving, or drivers are trying to save fuel because petrol prices are rising. Perhaps even more improvement would have happened without the charity: annoying campaigns occasionally provoke people into doing precisely what the campaign is trying to curb.
At this stage, all we have from the air quality charity is ‘before & after data’. So we have an attribution problem. We know what happened but we have no idea why, and therefore we have no clue about the charity’s impact.
To determine the charity’s impact we need to ascertain three things:
1. What happened? In the air quality case, the change from 10% to 20% was pretty clear, but identifying everything that happened is often complicated.
2. How is that different from what would have happened anyway?
Since we’re normally allocating scarce resources, analysts usually also need to know:
3. How good are those results relative to other charities’ results?
To answer the second question, we need to understand the ‘counterfactual’: what would have happened anyway. That requires having a ‘control’ – that is, a situation in which everything is the same except the charity’s work. Setting up a control is sometimes easy, often tricky and occasionally impossible. How to do it is a big topic for another day.
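To make the role of a control concrete, here is a minimal sketch using the article’s air-quality figures. The control-city numbers are hypothetical, invented purely for illustration: they stand in for a comparable city where everything is the same except the charity’s work.

```python
# Hypothetical illustration: why before & after data alone mislead.
# Suppose clean-air time rises from 10% to 20% in the charity's city,
# while a comparable control city (no campaign) rises from 10% to 16%
# due to background trends alone (better engines, rising petrol prices).

before_treated, after_treated = 0.10, 0.20   # city with the campaign
before_control, after_control = 0.10, 0.16   # comparable city, no campaign

# The 'before & after' change conflates the charity's effect
# with whatever would have happened anyway.
naive_change = after_treated - before_treated            # 10 points

# The control tells us the counterfactual: the change we'd
# expect with no campaign at all.
counterfactual_change = after_control - before_control   # 6 points

# Difference-in-differences: the change attributable to the charity.
impact = naive_change - counterfactual_change            # 4 points

print(f"Before & after change:   {naive_change:.0%}")
print(f"Change expected anyway:  {counterfactual_change:.0%}")
print(f"Estimated impact:        {impact:.0%}")
```

On these made-up numbers, the headline 10-point improvement shrinks to a 4-point impact once the counterfactual is subtracted: most of the change would have happened anyway.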
Notice that ‘before & after data’ get nowhere near the second question. Yet charities often present results which are in fact just ‘before & after data’. For example, we hear statements such as: ‘awareness of HIV transmission is much higher than when we started’, or ‘following our campaign, the law was changed’. This is no better than saying that ‘before our work, the average height of a child was 1.2 metres whereas afterwards it was 1.3 metres’!
We need to watch out, because ‘before & after’ data can be impressively complex or detailed: ‘HIV transmission rates are now 14% in the villages in which we work, whereas they were 20% a year ago’, or even ‘every time we go into a village, the transmission rate drops, and every time we leave, it rises again’, or ‘We use the “Complicated tool” to measure a randomly chosen sample of 10% of our patients, and we find that in 95% of cases we get a drop in xyz behaviour, with a 15% margin of error and 23% standard deviation’. But complexity and detail are no proof of rigour.
Before & after data, on their own, are not useful. Rather, we need to ensure that data about ostensible results show both what happened and how that differs from what would have happened anyway – because only then can we see whether anything is actually being achieved.