This article was first published in Third Sector.
There has been a huge rise in interest recently in the impact that charities have, so it’s remarkable that only now is rigorous evidence emerging about whether donors actually care. It’s a mixed picture.
A paper published last year reported on an experiment with a US charity, Freedom From Hunger. It divided its donor list into two random groups. Those in one group received a conventional solicitation with an emotional appeal and a personal story of a beneficiary, with a final paragraph suggesting that FFH had helped that beneficiary. Those in the other group received a letter identical in all respects – except that the final paragraph stated (truthfully) that “rigorous scientific methodologies” had shown the positive impact of FFH’s work.
Donations were barely affected. The mention or omission of scientific rigour had no effect at all on whether someone donated. It also had only a tiny effect on the total amount raised. People who had supported that charity infrequently were not swayed. However, people who had previously given ‘a lot’ – more than $100 – were prompted by the material on effectiveness to increase their gifts by an average of $12.98 more than those in the control group. On the downside, people who had previously made frequent gifts of less than $100 became less likely to give and also shrank their average gifts by $0.81 – all told, the net effect was about nil. But on the upside, this implies that more serious donors will give more if they are presented with decent evidence of effectiveness.
A separate study in Kentucky looked at whether donors give more when there is an independent assessment of the charity’s quality. Donors were each approached about one charity from a list; each charity had been given a three- or four-star rating (out of four) by the information company Charity Navigator. Half the donors were shown the rating; the other half weren’t. The presence of the ratings made no meaningful difference to their responses.
The third study has not yet been published, but is perhaps the most telling. It was a multi-arm, randomised, controlled test in which a large number of US donors each received appeals from one charity out of a set of charities that had various Charity Navigator ratings. Half of the appeals included the charity’s rating; the other half did not.
The overall effect of presenting the information was to reduce donations. Even the highest-rated charities gained nothing from having their ratings shown. For charities rated below four stars, showing the rating reduced donations; and the lower the rating, the greater the reduction.
Donors appeared to treat evidence of effectiveness as a hygiene factor: they seemed to expect all charities to have four-star ratings, and reduced their donations when they were disappointed, but never increased them because they were never positively surprised.
Three swallows don’t make a summer, of course, so there is much more to learn about donor behaviour. Even if it transpires that donors really don’t care about impact, our constituents do, and so must we.
Contribute to work with the University of Chicago to better understand donor behaviour.
Interesting findings. Your interpretation sounds sensible – donors (and probably people in general) have an expectation that charities are effective. Only when they discover that charities are ineffective are they surprised (man bites dog is news, and vice versa is not) and their behavior changes.
One implication is that we need to do a better job of educating the public that there are different levels of effectiveness and evidence, and that just because something is a charity doesn’t mean its programs achieve their desired results. In fact, we have little evidence that most things in the social service sector actually “work”; there is perhaps somewhat more evidence in other sectors, but still not enough. (E.g., diet supplements: they don’t work, but people keep buying them.)
However, I think researchers also need to consider other ways of conveying information about impact. A single sentence about “rigorous scientific research” and stars from Charity Navigator are hardly the only ways you could talk about evidence and impact. In fact, Charity Navigator’s stars are based less on impact than on financial analysis of the organization and its fundraising.
Perhaps the important research topic would be *HOW* to convey impact information in ways that people can understand and find valuable. I like outcome data on long-term life benefits (% survival, % going to school, etc.); others might prefer dollars per quality-adjusted life year, beneficiary satisfaction, and so on. Some might like to read details, others might prefer infographics. You could also try to vary the credibility of the source. Obviously, there is very limited credibility in anything that an organization mails to you of its own accord, since you can’t tell how accurately or selectively it may be using “evidence”.
Thanks, Chris. Indeed.
First, please know that we, with the University of Chicago, are looking at some of those questions about influencing donor behaviour. See http://www.giving-evidence.com/chicago
Second, just to clarify, Charity Navigator is significantly amending its algorithm. The new one isn’t yet fully rolled out, but is much better. See https://giving-evidence.com/2012/12/11/charity-navigator/