The purpose of Giving Evidence’s research and analysis is to encourage and enable giving based on sound evidence. It covers both of the major decisions that donors face:
- What to fund, i.e., the (relative) effectiveness of various interventions, organisations and new research that they might fund, and
- How to fund, i.e., the way that the giving is organised, e.g., a few large grants vs. many small ones, grants vs. loans, how to choose (e.g., staff choose, trustees choose, an algorithm chooses), what reporting is requested, and how donors assess their own effectiveness.
All our published research reports are here. Our research and analysis includes:
- Producing new primary research (though we do not tend to do evaluations of interventions, mainly because, unlike most of this terrain, it is well-covered by others).
- Synthesising existing evidence
- Identifying gaps in existing evidence and particularly priority gaps for new research
- Identifying demand / need for new research
- Looking at how information / research (incl. monitoring and evaluation) are stored and how findable they are: the information architecture
- Analysing the research that charities and donors produce, e.g., for its quality.
Below are some examples of our research and analysis, ranging from the quick and simple to the complex:
Producing analysis which suggests that most operational charities should not produce evaluations
This is a quick analysis (published here) of data published by the UK Ministry of Justice Data Lab, evaluating programmes run by UK charities to reduce one-year re-offending.
It shows (a) that the effects of these programmes vary massively; and (b) that many such programmes were too small to produce statistically significant results.
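To illustrate point (b), here is a minimal sketch of a standard two-proportion power calculation (with illustrative numbers of our own, not the Data Lab’s actual figures), showing how many participants a programme would need before a modest fall in re-offending could show up as statistically significant:

```python
from math import sqrt, ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.8):
    """Participants needed per group for a two-proportion z-test
    (normal approximation) to detect a change from rate p1 to p2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # significance threshold
    z_b = NormalDist().inv_cdf(power)          # desired power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Detecting a fall in one-year re-offending from 30% to 25%
print(n_per_group(0.30, 0.25))  # ≈ 1,251 per group
```

With these illustrative numbers, roughly 1,250 people would be needed in each of the treated and comparison groups: a programme serving a few hundred people simply cannot demonstrate an effect of this size.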
Producing the first (and only?) analysis of charities’ spend on admin / overhead vs their performance
Many people think that a clever move is to choose charities that spend little on administration / overhead. Giving Evidence published the first ever analysis of how overhead spend correlates with charities’ performance.
Spoiler: contrary to popular belief, the relationship goes the other way: strong charities spend more on admin than weaker ones do. So, for donors, this ‘clever move’ is a bad move. Skimping on overheads is a false economy.
Producing primary research and analysis about how to give
We analysed all grants made by the Hong-Kong-based ADM Capital Foundation in its first ten years. We coded each grant according to how successful it was, and investigated whether that success seemed correlated to (i) characteristics of the grantee, e.g., location, size, or (ii) donor behaviour, e.g., size of grants, restrictions, extent of funder involvement.
The goal was to find what tends to make for successful grants (in that sector and those countries) in order to guide future funding. Findings are here.
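As a sketch of this kind of analysis (using invented data and a simplified success score, not ADM Capital Foundation’s actual data or coding scheme), one can compare average grant success across funding characteristics:

```python
from statistics import mean

# Hypothetical grant records: each grant gets a success score (0-2)
# plus one grantee characteristic and one donor-behaviour characteristic.
grants = [
    {"success": 2, "size": "large", "restricted": False},
    {"success": 1, "size": "small", "restricted": True},
    {"success": 0, "size": "small", "restricted": True},
    {"success": 2, "size": "large", "restricted": False},
    {"success": 1, "size": "small", "restricted": False},
    {"success": 2, "size": "large", "restricted": False},
    {"success": 0, "size": "small", "restricted": True},
    {"success": 1, "size": "large", "restricted": True},
    {"success": 2, "size": "large", "restricted": False},
    {"success": 0, "size": "small", "restricted": True},
]

def avg_success(trait, value):
    """Mean success score among grants sharing one characteristic."""
    return mean(g["success"] for g in grants if g[trait] == value)

print(avg_success("size", "large"))      # large grants
print(avg_success("size", "small"))      # small grants
print(avg_success("restricted", True))   # restricted funding
```

In the real study, such comparisons (run across many characteristics at once) are what suggest which grantee traits and donor behaviours are associated with success.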
To our knowledge, this is only the second such analysis anywhere: the first was by the Shell Foundation.
Synthesising existing research about effectiveness of interventions in outdoor learning
Giving Evidence partnered with the EPPI Centre at University College London to produce a systematic review of all the relevant ‘what works’ studies about outdoor learning. This was requested by a foundation active in that sector, to inform its own funding, as well as the wider sector.
We looked at what the existing research does and does not cover, assessed its quality, and summarised what it says. More about the method and the findings is here.
Identifying gaps in existing research about how to fund
We wrote a white paper with the University of Chicago Booth School of Business which looks at (i) what good giving is, i.e., what donor behaviours produce the best outcomes, and (ii) how to persuade/enable/nudge donors towards those behaviours. It collates what is known on these topics, and lays out many unanswered questions which would form a strong research agenda. More here.
Synthesis and mapping of existing research about effectiveness of interventions in child abuse
A foundation which funds institutional responses to child abuse asked us to map all the existing research (globally) into the effectiveness of interventions there, and see what the evidence says. With various partners, we first produced an ‘evidence and gap map’ which shows what exists, its quality and the gaps. Second, we produced a ‘guidebook’ which summarises what the evidence says. More here.
Producing research to understand why one funder’s monitoring system is so popular with grantees
Funders’ reporting and evaluation systems are rarely loved. But not so for the Inter-American Foundation: it has received better feedback from its grantees about its reporting and evaluation system than have the ~300 other foundations analysed by the Center for Effective Philanthropy – twice.
We were interested to know why that system is so unusually popular, in order to see what other funders can learn. Giving Evidence did primary research with various grantees. In short, the answer is that (most) grantees perceive it not as policing or compliance but as building their capacity, case and credibility. More here.
Research into the ‘evidence systems’ in medicine, and education in low- and middle-income countries
These two large studies look at the evidence systems: i.e., how & by whom evidence is produced, synthesised, disseminated, and used – and what aids and hinders each stage. The studies were motivated by the observation that evidence production and use are much more sophisticated in medicine than in education, so our goal was to identify lessons or insights for education. They were funded by two foundations active in education in LMICs. More here.
Investigating improving the architecture of research produced by UK criminal justice charities
We had noticed funders struggling to understand what charities’ “impact reports” really say, because the research methods used are explained so badly (or not at all). Equally, it is hard to find charities’ reports: perhaps some NGO has discovered a great way to, say, prevent re-offending or improve literacy, but nobody else knows about it, so the genius innovation doesn’t spread.
Giving Evidence explored the feasibility of getting research by charities (including ‘monitoring and evaluation’) to be:
a) clearer, e.g., by creating a little checklist of items for charities’ research to detail: such as details of the intervention, the research question, the research method, and how the method was used. And
b) easier to find, e.g., by creating a repository to hold it.
Researching what research (incl. monitoring and evaluation) UK mental health charities produce and use
This study was in conjunction with a foundation active in mental health and wanting to make the charities in it more effective at using and producing evidence. We interviewed 12 UK charities about what they produce and use. The main finding was that they use very little formal scientific evidence, because they don’t realise that it exists and/or it is paywalled, and they are often forced by donors to produce low-quality research (mainly monitoring and evaluation). This impedes operational work and is a waste of resources. More here.
Royal patronages of charities: what are they, who gets them, and do they help?
If you think of a Royal patron as a donor to a charity (which they sort of are), these questions are about donor effectiveness. And if you care about charities’ effectiveness, this research can help, because it looks at whether the effort that charities put into securing and retaining / servicing Royal patrons is worthwhile.
In short, we could find no benefit of Royal patronages – despite multiple very complicated analyses. More here.
Could more charities and funders make their decisions in public? A few already do: how many others do too?
All charities and charitable foundations exist to serve the public good. The 800-year-old City Bridge Trust lets anybody observe its decision-making meetings, and Global Giving UK has an AGM at which anybody can ask anything. Why don’t more?
Giving Evidence simply telephoned the 20 largest charities and foundations in each of the UK and the US and asked whether they ever have any meetings which the public can attend, and whether the public can ask questions. Of the 82 organisations we asked (an extra two crept into the sample), only two have any meetings in public; none allows the public to ask questions.
This is about accountability and transparency, to the people who provide subsidy and to the people the charities and foundations exist to serve. The research is here.