Systematic review of evidence to inform funding practice: outdoor learning

What is known about what works and what doesn't? What can we learn from the existing literature and the experience of other organisations about what works and what doesn't – and for whom and in what circumstances – to help us make better funding decisions?

These questions prompted a study of outdoor education for 8-25-year-olds commissioned by the Blagrave Trust, a family foundation which supports disadvantaged young people in southern England. Giving Evidence worked with the EPPI Centre (Evidence for Policy and Practice Information and Co-ordinating Centre) at UCL, which routinely does systematic reviews of the literature in particular areas to inform decision-makers. The full report is here and a summary is here. The material is also summarised in a book by some Swiss academics, published in 2022, here.

We were excited to do this project because those questions, and this way of answering them, pertain to any area of charitable funding, service delivery or social policy. The charitable sector focuses a lot on doing monitoring and evaluation – i.e., producing research – but is weirdly unconcerned about using research already created by others. We've written about this before, and will do so again: using systematic reviews of the existing literature could save a lot of time and money and significantly upgrade performance. There seems to be appetite in the outdoor learning sector to hear and heed the findings from research.

Study aims and logic

This particular study aimed to:

  • Categorise the various outdoor learning activities in the UK, in order to give funders a coherent sense of the sector as a whole and a clear view of their options;
  • Identify the outcomes which organisations running those activities are measuring, i.e., what providers seem to be seeking to achieve; and
  • Assess the designs of individual evaluations and the overall standard of evidence for different types of outdoor learning. Some types of study are more reliable than others: poor-quality research normally means that the design doesn't allow researchers to distinguish the effects of the programme from other factors and from chance.

Such a study can be very valuable. Most obviously, it can guide providers and funders towards the most effective interventions. It can also guide research within the sector, and hence reduce research waste. In many sectors, donors and operators collectively spend a lot on research but with no co-ordination about where it is spent: often the money goes wherever activity is greatest. It is better to identify the areas which most need additional research. That depends in part on the 'evaluate-ability' of the various interventions, i.e., whether they are ready to be evaluated.

This is useful because NGOs often evaluate work before it is ready, or after the effect has already been reliably established elsewhere. Mistimed evaluation and poorly designed evaluation are major causes of research waste, which we hope to reduce.

The protocol for the review was published, and is here.
