Funders’ reporting and evaluation systems are rarely loved: they are more often regarded as compliance or ‘policing’. The Inter-American Foundation (IAF) appears to be an exception: it received better feedback from its grantees on its reporting and evaluation system than any of the ~300 other foundations analysed by the Center for Effective Philanthropy (CEP). CEP’s anonymous survey of grantees includes the question:
“How helpful was participating in the foundation’s reporting/evaluation process in strengthening the organization/program funded by the grant?”
In both 2011 and 2014, IAF got the highest scores CEP has ever seen for this question, and it came top by some margin. Respondents can answer from 1 (“not at all helpful”) to 7 (“extremely helpful”): in 2014, IAF scored 6.00, while the funders ranked second and third on this question scored 5.80 and 5.72.
Giving Evidence is interested in charitable giving based on sound evidence, so we investigated.
As explained in an earlier post, IAF’s model is unusual in being more engaged with grantees than most funders are: it visits each grantee several times and provides an evaluator who visits biannually to help set up the data system and verify the data which grantees submit. IAF also has an unusually broad set of metrics, and gives grantees a choice of which ones they use.
Headline finding: The reporting system is part of the intervention
The main finding from Giving Evidence’s research is that the reporting and evaluation system is part of the intervention. Whilst that may sound obvious, it is not how funders typically conceptualise reporting and evaluation: more often, ‘the funder’s intervention’ is the money and perhaps some other support, while the reporting and evaluation sit apart from it and exist to understand its effect.
What grantees talk about when they talk about IAF’s reporting process
We needed to establish what grantees were referring to when they gave those high ratings in CEP’s Grantee Perception Report (GPR), so we asked interviewees to list the components of the reporting process. Herein lay the first surprise: the financial report and the financial audit were among the items most frequently listed. They are not normally even considered part of reporting and evaluation.
[Chart: number of grantees who listed each component of IAF’s reporting process]
What grantees value
In many ways, IAF’s reporting and evaluation system functions like a capacity-building programme. It seems to give grantees four main benefits:
- Data: many of the grantees said that they had no data management system at all before IAF’s involvement, and that the requirement to report, together with help setting up a sensible data collection system, means they are now able to make data-informed decisions. “Before, we would have gone without collecting these data. We did not think it was important. But today, yes, we would do it independently of a funder’s requirement.” All nine grantees whom Giving Evidence interviewed rated its value to them as at least 7 out of 10.
- Capacity: grantees learn to collect, handle, interpret, present and use data. This is particularly important for the organizations with the least developed skills in management and analysis, and which have not previously collected data at all.
- Confidence / courage: grantees gain confidence in their ability to collect data, and in the accuracy and completeness of those data. Some grantees find this useful in their dealings with other organizations, such as other funders. This seems to be why the financial audit is the component of the reporting and evaluation system most highly valued by grantees.
- Credibility: with their beneficiaries / communities, and with other organizations. Terms like ‘accountability’ and ‘transparency’ were used frequently.
These benefits are most prized by grantees which are earlier on the learning curve. We played a ‘game’ with interviewees in which they were given notional money which they could either spend on the various parts of the reporting and evaluation process or keep for programmes: most (six of nine) would rather have the reporting and evaluation process than the equivalent money for their programmes. Given the unpopularity of most funders’ reporting processes, this seemed remarkably high.
The reporting framework and metrics themselves did not prove terribly popular or influential, but the evaluators’ visits did, being integral to building grantees’ knowledge and skills. As grantees put it:
“Her role is not to find errors. She is here to help us grow stronger and improve.”

“The recommendations made by the IAF, we also apply them to all our projects. It helps us improve our administrative systems. When the project ends, we will continue with these practices…”

“The evaluator’s … observations and criticisms… are useful for us to improve and see things that we don’t see on our own.”
Implications for IAF and other funders
A high-touch reporting and evaluation process may be useful when dealing with small grassroots organizations, as IAF does. Some grassroots organizations reported being so unskilled with data – and showed themselves to be so in our numerical exercises – that we would question the accuracy, meaning or usefulness of the data they report to funders if they are not given support. Conversely, organizations which are more sophisticated and already further up the learning curve gain less from a high-touch process: some may need less support, and some may need none. It may therefore be wise to segment grantees by the extent and type of support they need.
The full case study is published here. IAF and Giving Evidence hope that this is useful to other funders, and also encourages other funders to share data about their own performance and findings in order to improve the practice of grant-making across the field.
This is part of our work on generating evidence about ways of giving. More is here.