The charity sector should use more systematic reviews to leverage what’s already known

Any single piece of evaluation research, designed to understand the effect of an intervention, has limitations. It will examine the effect of a particular intervention on some particular outcomes in a particular group of people (‘population’), at a particular time. That’s fine, but it inevitably limits the value of the research for organisations using, say, the same intervention on a different population. Studies also vary in their robustness – that is, in the risk that their answer is wrong – and even a good study can get a weird result by fluke.

Better, then, to look at multiple impact studies when designing programmes or making funding decisions. This is what the Blagrave Trust, a foundation, recently asked my organisation, Giving Evidence, to do on the subject of outdoor learning, one of its funding areas; the report will be published next month. With University College London, we looked for every relevant study of outdoor learning published over the past 10 years. ‘Systematic reviews’ such as this one enable people to stand on the shoulders of myriad giants – and see a long way. The first stage is deciding precisely what you’re interested in – location, types of intervention, populations, study designs and so on – and how you will find the material. That scope and method get written up and published so that anybody can check or repeat the review.

Within the turf we defined, we found 15 systematic reviews of outdoor learning from various countries. Collectively, they cover a huge terrain: one of them analysed 150 studies, another 96 studies, another 58. Their combined insights are much less open to bias or irrelevance than any individual study, and they show the patterns much more clearly.

We also looked for primary evaluations (studies of people, rather than studies of studies) done in the UK, both in the academic literature and by ‘crowdsourcing’ through the project’s steering group. We found 58. That small number surprised some people – there are several entire journals about outdoor learning, though a study of them found that only 11 per cent of the research they report assesses outcomes and effects. (That isn’t to deride the rest: it just had different purposes.)

Two features of the 58 UK primary evaluations are striking. First, they’re spread pretty thinly across many interventions, age groups, outcomes and so on. That spread limits their reliability because any one of them might be a fluke. Second, they don’t score well for quality. We assessed them on the five-level scale that Project Oracle uses for evidence underpinning youth work in London. Only about half make it onto the scale at all, only one reaches Level 3, and none gets higher than that.

A central tenet of medical research is to ‘ask an important question and answer it reliably’. Systematic reviews like this can show what’s reliably known already and guide future research by highlighting important remaining questions so that we can focus our finite research resources on answering them properly. The charity sector should use systematic reviews much more: they’re often much better than the standard approach of commissioning one new, fresh, but limited study.

Giving Evidence and UCL’s systematic review on outdoor learning is due to be published next month.

This article was first published in Third Sector.
