Measuring children’s safety in organisations: Evaluating the strengths and limitations of currently-used measures

A new project will collate the measures used in studies of institutional responses to child abuse, and show how researchers and practitioners can use these measures, along with each measure’s strengths and limitations. Article by Gabrielle Hunt.

A newly updated Evidence and Gap Map (EGM) collated rigorous ‘what works’ studies about institutional responses to child maltreatment, including both preventing it and responding to it. Although the body of evidence is growing, there are still significant gaps.

The EGM includes randomised controlled trials, quasi-experimental design studies, and systematic reviews. Most of the included studies examined interventions that aimed to raise children’s knowledge (e.g., about how to avoid being abused or how to report abuse) or included measures to assess improvements in their well-being. Though this is important for a well-rounded approach to prevention, it places much of the responsibility for safety on children rather than on the adults who care for them. Few studies directly measured the incidence of maltreatment, and only a few measured the attitudes or skills of adults working with children or the culture in institutions.

It is essential to determine whether the measures used in the studies actually measure children’s safety or incidence of abuse, and which measures are most useful for research and practice.

Professor Daryl Higgins and Gabrielle Hunt from the Institute of Child Protection Studies (ICPS) at Australian Catholic University will work on a 3-month project to collate details about all the outcome measures used in the studies on this EGM. ICPS is a nationally recognised centre for excellence in child, youth, and family welfare and is committed to collaborative approaches to translating knowledge into policy and practice. Professor Daryl Higgins has researched child abuse impacts and prevention, family violence, and family functioning for nearly 30 years. He has focused on public health approaches to protecting children and child-safe organisational strategies. Gabrielle’s PhD, supervised by Daryl, examines the prevalence and prevention of child sexual abuse in faith-based settings, as well as harmful sexual behaviour and peer sexual harassment across the population.

This project aims to understand how to apply these measures effectively and identify their strengths and limitations (psychometric properties). Our initial review has revealed inconsistencies in how surveys, questionnaires, or tools to measure ‘safety’ are applied in organisations. Many studies focus on ‘proxy’ measures, such as children’s knowledge, but these measures may not accurately predict better safety. We also plan to explore other tools that have yet to be used in causal or experimental research that may be useful to youth-serving organisations.

We hope that by exploring the studies on the EGM, we can provide new insights into the usefulness of the measures used. We would be delighted to hear from other researchers who have done work to mine data in an EGM or other similar reviews to share their insights and experience. We plan to publish a resource outlining our findings and identify measures that leaders, practitioners, researchers, and funders can use in their work.

If you have conducted a similar study, or would like to hear more, please get in touch.


Why the Foundation Practice Rating doesn’t assess the same foundations each year, and why that’s fine

The Foundation Practice Rating rates 100 UK charitable grant-making foundations each year on their practices on diversity, accountability and transparency. The set of foundations which we research and rate changes from year to year. A couple of people have asked recently why we do that and whether it compromises the FPR’s rigour. This article explains.

Our sample

To be clear, the set of 100 ‘included foundations’, as we call them, each year is as follows[1]:  

  1. The five largest charitable grant-making foundations by giving budget.
  2. All the foundations which fund the Foundation Practice Rating. (There are currently 13 of them. One is not a charity: the Joseph Rowntree Reform Trust.)
  3. A random sample of: community foundations across the UK (as listed by UK Community Foundations, the membership body of community foundations), and the ~300 largest foundations in the UK (as listed in the ACF’s annual Foundations Giving Trends report).

The sample is organised so that it is stratified, i.e., a fifth is from the top quintile in terms of giving budget, a fifth from the second quintile etc. So, for example, if no foundation funding the FPR is in the 2nd quintile, then all 20 included foundations in that quintile would be chosen randomly; whereas if three foundations funding the FPR are in the 2nd quintile, then 17 foundations in that quintile are chosen randomly. Obviously, at least five ‘slots’ in the top quintile are filled non-randomly (by the five largest foundations), and some other ‘slots’ are filled by foundations funding the FPR, so in the top quintile, not all the ‘slots’ are filled randomly. The foundations funding the FPR vary considerably in size: they are not all at the top.
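To make that selection procedure concrete, here is a minimal sketch in Python of how such a stratified draw could be implemented. The function name, data structure and field names are illustrative assumptions for this post, not the FPR’s actual code or its exact rules.

```python
import random

def build_fpr_sample(foundations, largest_five, funders, per_quintile=20, seed=None):
    """Sketch of a stratified draw. 'foundations' is a list of dicts with 'name'
    and 'giving_budget'; 'largest_five' and 'funders' are sets of names that must
    be included wherever they fall. Returns roughly 5 * per_quintile names."""
    rng = random.Random(seed)

    # Rank foundations by giving budget and split into five quintiles.
    ranked = sorted(foundations, key=lambda f: f["giving_budget"], reverse=True)
    quintile_size = len(ranked) // 5
    sample = []

    for q in range(5):
        start = q * quintile_size
        end = (q + 1) * quintile_size if q < 4 else len(ranked)
        names = [f["name"] for f in ranked[start:end]]

        # Non-random 'slots': the five largest foundations and the FPR's funders.
        fixed = [n for n in names if n in largest_five or n in funders]

        # Fill the remaining slots in this quintile at random.
        remaining = [n for n in names if n not in fixed]
        n_random = max(per_quintile - len(fixed), 0)
        sample.extend(fixed + rng.sample(remaining, min(n_random, len(remaining))))

    return sample
```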

We re-make the sample each year. The FPR is not a panel study: we do not track the same participants over time. This is intentional.

Notice that our sample is 100 out of about 340 foundations*. Thus we include ~29% of the total set. (*Those are: the ~300 on the list in the ACF report, + about 35 community foundations, + a couple of foundations which fund FPR which are in neither of those.) 

Why do we change the sample each year?

Well, on the first part of our sample, the five largest foundations change around: in the three years that we have been doing this, eight foundations have appeared in the largest five at some point. Looking at the chart below, it would seem rather bizarre to continue to rate, say, BBC Children in Need – now the 11th largest foundation – just because it was in the largest five when the FPR happened to start. We always include the (then) five largest foundations because their practices dominate grant-seekers’ experiences, so it is important to reflect which foundations those large ones are at the time.

On the second part of our sample, the set of foundations funding FPR changes: in the first year, there were only 10 and now there are 13.

On the third part of our sample, the rationale is this. First, we are trying to get a representative picture of progress across the whole foundation sector. And second, part of the ‘intervention’ of FPR is foundations knowing that they might be included at any time. If some foundations knew that they would definitely be included, they would have an incentive to improve their practices in order to improve their grades, but other foundations would not feel that incentive so might not improve, or at least, not make so much effort to improve. Thus the random selection enables FPR to have more influence than if it were a panel study: and our primary goal is to influence practice.

These two reasons interact. If FPR were a panel study, quite probably the foundations included would improve more than those that are not, and we would gain zero information about the set which are not included. They might well diverge over time. We therefore would not get a sense of the sector as a whole.

Given that the sample changes, how can FPR make year-on-year comparisons?

The technique of studying a randomly-selected subset of relevant entities is used in many surveys of public opinion, including consumer confidence and voting intention. Typically, those survey 1,000 randomly-chosen adults from across the country. The sample may be adjusted to make it representative, e.g., in terms of age, gender, and the four nations of the UK. That is like FPR ensuring that our sample is representative in terms of foundations’ size. So, when you see news stories that voting intention has changed, those are almost certainly based on sequential studies of a small set of people, and that set is freshly-drawn each time.

Professor Stephen Fisher of Oxford University studies public opinion and was on the British Polling Council panel that investigated the 2015 UK General Election polls. He says:

“The methods that FPR uses are very sensible. Dividing foundations into five groups according to how large those foundations are, and then randomly selecting foundations within each group should ensure a broad and representative sample overall. Opinion polls aren’t perfect, but they typically get the share of the vote within a margin of error of +/- 4 percentage points. They come from sampling around 1 in every 30,000 voters. FPR is sampling about 1 in every 3.5 foundations: a much larger proportion of the total, and with much more coverage of the bigger foundations. On that basis, fluctuations in the FPR due to random differences in sampling should be very small indeed.”
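One way to see part of that argument is the ‘finite population correction’: when a sample is a large fraction of the population, sampling error shrinks relative to drawing a same-sized sample from a much larger population. The figures below are our own rough illustration of that effect, not Professor Fisher’s calculation.

```python
import math

def finite_population_correction(n, N):
    """Factor by which the standard error of an estimate shrinks when a sample
    of size n is drawn without replacement from a finite population of size N."""
    return math.sqrt((N - n) / (N - 1))

# Illustrative figures only: 100 foundations out of ~340, versus 1,000 voters
# out of ~30 million (roughly "1 in every 30,000").
print(round(finite_population_correction(100, 340), 2))          # ~0.84
print(round(finite_population_correction(1000, 30_000_000), 5))  # ~0.99998
```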

Making year-on-year comparisons 

On the basis described above, it is rigorous to compare the full set of 100 foundations year-on-year. We made that comparison in the Year Two report – i.e., the first year for which we had a previous year. In that report, we also included comparisons of:

  • The set of foundations which were included in both years 
  • The set of foundations which were randomly included in Year One with the set of foundations which were randomly included in Year Two.

In each case, we assessed the changes in overall numerical scores and numerical scores on each of the three domains (diversity, accountability and transparency), and we looked at whether those changes were statistically significant. 

We will repeat and extend those analyses in subsequent years.  

The FPR Year One (2021/22) and Year Two (2022/23) reports are here.

[1] Foundations can opt-in to the FPR: they can pay to be assessed. They are treated as follows. If a foundation wants to opt-in and happens to be selected randomly for inclusion, then it is treated as a normal randomly-included foundation: it does not pay and its results are included in the analysis of the main 100. By contrast, if a foundation wants to opt-in and is not selected randomly for inclusion, then it pays and is not included in the analysis of the main 100. This is to avoid selection bias in the sample.


How diverse are UK foundations’ staff and boards?

Very few UK charitable foundations disclose the diversity of either their staff or their boards, according to research by Giving Evidence for the Foundation Practice Rating (FPR). Of the 100 foundations included in the FPR in 2022-23, only six disclose the diversity of their staff, and only six disclose the diversity of their trustees. In total, nine of the 100 foundations disclose data on either staff or trustees.

Why does Foundation Practice Rating research this?

The FPR researches 100 foundations each year. The process for selecting those 100 foundations is discussed here. The research covers three pillars: diversity, accountability and transparency. In diversity, FPR (currently) looks at whether foundations disclose the diversity of their staff and, separately, their trustees, on various dimensions. FPR does not currently look at what that diversity is.

Nonetheless, during the FPR research, the FPR team notes which of those 100 foundations make those disclosures. It is only a small step for us to then log and collate their actual data on what the diversity is, and so – despite this being outside FPR’s scope – we have done this.

To our knowledge, these are the first data about the diversity of UK foundations to be published, so we share them here in the hope that they are a useful contribution to the discussion. We intend to track these data over time, in order to show whether and how the diversity of UK foundations’ teams is changing.

The data about diversity of UK foundations’ personnel were gathered in autumn 2022 using the most recent materials published by the foundations at that time.

The foundations which disclose diversity of their personnel

The foundations that disclose these diversity data are:

Trustees | Staff
Esmée Fairbairn Foundation* | Esmée Fairbairn Foundation*
Walcot Educational Foundation | Walcot Educational Foundation
Joseph Rowntree Reform Trust* | Barrow Cadbury Trust*
Blagrave Trust* | Garfield Weston Foundation
John Ellerman Foundation* | Power to Change*
The Wellcome Trust | The Wellcome Trust

*Members of the funders group

We noted where a foundation disclosed diversity of its staff and/or trustees on the following four dimensions. Note that: (a) we used the foundation’s own wording and categorisations; and (b) to make the graphs legible, the graphs show only one data-point on each category:

  • Ethnicity (the graph shows people whom the foundation reported identify as something ‘other than white’)
  • Gender (‘other than male/man’)
  • Sexual orientation (‘other than straight sexual orientation’)
  • Disability (self-declared as having a disability)

Not all foundations reported their data in all four of these categories.

We excluded (from the numerator of the figures in the graphs) responses such as “prefer not to say” and show only data provided by individuals who disclosed their characteristics.
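To make the treatment of non-responses concrete, here is a minimal sketch of one plausible way such a percentage could be computed. The function name, the field values, and the choice to drop ‘prefer not to say’ from both numerator and denominator are our assumptions for illustration, not a description of the FPR team’s actual calculation.

```python
def percent_other_than(responses, reference_category):
    """Share of people who disclosed a characteristic and fall outside the
    reference category. 'Prefer not to say' responses are excluded from both
    numerator and denominator (one plausible reading of the method)."""
    disclosed = [r for r in responses if r.lower() != "prefer not to say"]
    if not disclosed:
        return None
    other = [r for r in disclosed if r.lower() != reference_category.lower()]
    return 100 * len(other) / len(disclosed)

# Hypothetical trustee ethnicity responses for one foundation:
trustees = ["White", "Black British", "Prefer not to say", "Asian British", "White"]
print(percent_other_than(trustees, "White"))  # 50.0
```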

We are aware of the limitations of this approach – simply collating foundations’ disclosures in the way that they are reported – but hope that these data are useful nonetheless.

Some Key Findings

Race: Racial diversity varied significantly among the foundations that shared data in this category. The percentage of individuals identifying as a race other than white ranged from zero to above 60%. For instance, among trustees, Blagrave Trust reported that over 60% of its trustees identify as non-white, whereas the Joseph Rowntree Reform Trust reported 0% in this category. Among staff, there was also variation, but with a noticeable concentration in the range of 30% to 40%.

Gender Diversity: The gender data show the proportion of individuals who do not identify as male within the foundations that provided data in this regard. This category includes individuals who identify as women, non-binary, or another gender. Among the foundations that reported these data, there was significant representation in the “other than male” category.

Sexual Orientation: Sexual orientation was the characteristic on which fewest foundations reported: only three foundations disclosed information about this for trustees, and only one foundation disclosed it for staff. It is worth noting that there is an ongoing debate within the sector regarding whether or not sexual orientation should be disclosed. This debate primarily weighs protecting individuals from potential harm that may arise from such disclosure against liberation and challenging heteronormative practices.

Disability: Disability disclosure was also surprisingly low, given that 17% of the UK population is disabled (Census, 2023).

Staff diversity: one foundation, the Esmée Fairbairn Foundation, splits its staff diversity data by type of staff. This allows for a more granular understanding of diversity within that foundation. The graphs show the split reported by Esmée Fairbairn Foundation.

Conclusion: The data presented provide a snapshot of:

  • The amount of diversity reporting by UK grant-making foundations, which is disappointingly little, and
  • The reported diversity of staff and trustees in those foundations who do disclose it.

We hope that these findings contribute to the ongoing conversation around diversity and inclusion within the grant-making sector, help to promote more disclosure, and encourage further exploration of diversity-related initiatives.

More about the Foundation Practice Rating is here.


Surprising churn in the top UK foundations

How much churn is there amongst the largest UK grant-making foundations (by giving budget)? One might expect basically none, because huge foundations don’t get created very often, and foundations don’t compete for resources. Giving Evidence looks at these data each year for our work on the Foundation Practice Rating, and we find that there is a surprisingly high amount of churn. These are the data for the last few years.

Why is that churn there? We haven’t investigated so can’t say. Maybe it’s related to investment income – because success there might enable larger giving budgets. Maybe it’s related to other income: for example, BBC Children in Need and Comic Relief, which were both in the top 5 in 2019, raise their grant budgets from the public, which one might expect to rise and fall.


Why the system for charities applying to foundations is so expensive, and what can be done about it

The system by which charities apply to charitable foundations for funding is dreadful. When I ran an operational charity, the team and I spent far too long on it – often simply re-writing and re-formatting the same information multiple times into multiple forms, many of them badly designed. Happily, our organisation, Giving Evidence, was recently able to study why the system is as it is, and what might improve it. Download our report here. An appendix is here with further detail, including about the economic modelling. Several findings were pretty striking.


First, the analysis confirmed experience: the “cost of capital” for charities securing funds from foundations is very high. We found it to be at least 5.6% across all UK charities, and particularly high for small charities, costing them at least 17.5% of funds raised from foundations. (I say ‘at least’ because all the assumptions in our modelling were conservative.) Companies don’t pay anything like that when they raise capital (e.g., through loans or issuing bonds or equities). The total cost to UK charities is at least £900m every year – about 1.5 times the revenue of the National Trust.
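To illustrate what a ‘cost of capital’ figure of that kind means in practice, here is a toy calculation with invented numbers; it is not the report’s model, whose (conservative) assumptions are set out in the appendix.

```python
def fundraising_cost_of_capital(hours_per_application, cost_per_hour,
                                applications_per_grant_won, average_grant):
    """Toy illustration: total application cost incurred per grant actually won,
    expressed as a percentage of the funds raised. All inputs are made-up figures."""
    cost_per_grant_won = hours_per_application * cost_per_hour * applications_per_grant_won
    return 100 * cost_per_grant_won / average_grant

# Hypothetical small charity: 20 hours per application at £25/hour,
# 5 applications submitted for every one funded, £15,000 average grant.
print(round(fundraising_cost_of_capital(20, 25, 5, 15_000), 1))  # 16.7 (% of funds raised)
```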


It’s a silly problem. Foundations are charities, so charities spending resources dealing with each other is friction, entirely of charities’ own creation. We should be able to solve this amongst ourselves. It’s not an external problem, like global poverty or encroaching authoritarianism, which is where our resources should go.


Sadly, the problem of talent and resources being wasted isn’t unique to charities:

Professor Sarah Gilbert, who devised the Oxford / AstraZeneca Covid-19 vaccine:
“Actually, raising funds had been my main activity for years… I trained for years to become really good at ‘doing science’…but… what I actually spend my time doing these days is, mostly, bringing in the money. This [system] can be counterproductive for the cause of scientific research itself… ” (Vaxxers, by Professor Sarah Gilbert & Dr Cath Green)

Second, we found foundations more willing than charities to give up time to discuss this topic. That was a surprise, because the pain and costs are borne by charities.


Third, building on the above, not all our charity interviewees considered the cost of dealing with foundations to be a problem at all. Some – particularly in large charities – just felt that “the pain is part of the job”: it’s inevitable so no point fretting about it. Perhaps they have that view because large charities can afford specialist teams to deal with foundations – people rarely speculate on abolishing their own job – and to some extent, large charities compete on their ability to navigate this maze.


Fourth, the problematic system arises mainly because each foundation designs its own process to suit itself. They typically make little use of other foundations’ existing processes: e.g., few new foundations simply copy another foundation’s application form, which would reduce the re-formatting. And few foundations design their process to minimise workload for charities. I have often seen this in my work advising foundations. Indeed, most new foundations create written application forms precisely because they see other foundations having them, and they assume that what other foundations do must be good practice. On many issues, my book of guidance for donors urges them: “don’t just copy”.

Fifth, we studied in detail the economic effects of various initiatives which aim to reduce the costs of this application system. For each initiative, we found non-mad circumstances in which it would help, and non-mad circumstances in which it would hinder. Hence reformers should be cautious. We analysed: shared application forms; shared application forms with pooled funds; and online systems for matching foundations with charities (somewhat like online dating systems). Such initiatives will help only if:

  1. They save more money than they cost, and
  2. Any other effects are tolerable.

These conditions are not always met. For instance, there are sensible circumstances in which a matching system will cost more to create, promote and run than it saves. The same goes for setting up pooled funds between multiple funders, the negotiation of which can be famously complicated. On the second condition, sometimes a pooled fund will alter where funds go – some areas will gain and others will lose – and that change must be acceptable. It took sophisticated economic modelling to reveal these effects.
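As a deliberately simplified illustration of the first condition, here is a toy model with invented figures. It is nothing like the economic modelling in the report, but it shows how an initiative can save or cost money on net depending on take-up and running costs.

```python
def net_annual_saving(charities_using, hours_saved_per_application,
                      applications_per_charity, cost_per_hour,
                      setup_cost_annualised, running_cost):
    """Toy model of a shared application form or matching system: the time it
    saves applicants versus the cost of building, promoting and running it.
    A negative result means the initiative costs more than it saves."""
    gross_saving = (charities_using * applications_per_charity
                    * hours_saved_per_application * cost_per_hour)
    return gross_saving - setup_cost_annualised - running_cost

# Invented figures: 400 charities, 3 applications each, 4 hours saved per
# application at £25/hour, against £80,000/year of set-up and running costs.
print(net_annual_saving(400, 4, 3, 25, 50_000, 30_000))  # 120000 - 80000 = 40000
```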


Lastly, despite all the above, this looks like a solvable problem. It is solved elsewhere:

  • UK universities have long shared a single application form.
  • In the US, over 900 higher education institutions share an application process: Common App was created expressly to increase equality.
  • A system called Lightning is shared by various UK charities and public sector institutions to get funding to people in financial hardship.
  • And the UK government is piloting with four departments an online system for SMEs and charities to apply for government funding. It is designed to solve the twin problems of (i) discovering what funding streams exist and (ii) avoiding duplicated information requests.

The requests from fundraisers are relatively straightforward. The campaign #FixTheForm found big demand simply for more clarity on foundations’ eligibility criteria, and forms which you can save part-way through completing and return to later.

This problem arises from both an information problem – foundations do not see or know the costs they create – and an incentive problem – the costs do not fall on the foundations themselves. Some foundations care a lot, and are working to reduce the burden. The prize here is releasing substantial resources for improving society and the environment, and that is surely worth considerable work.

This work was enabled by the Law Family Commission on Civil Society which exists to “unleash the potential of civil society”. A considerable ‘leash’ on civil society is this kind of inefficiency and cost of raising funds.

An article in Civil Society (a magazine) about this research is here.




Getting evidence to influence public policy

Many researchers want their research to influence public policy. Many charitable donors also want to influence / improve public policy, and often fund the production of research and other activities to that end. Sometimes it works, other times it doesn’t. What raises the chances of success? And how can a donor or researcher predict which opportunities or approaches are likely to be fruitful? Giving Evidence was hired by a large foundation to find out. We worked with On Think Tanks, and are here sharing some of what we found and learnt. We hope that it is helpful to you!

The foundation funds the production of research and ‘evidence into policy’ (EIP) activities. It focuses on low-income countries. Most of the researchers whom it funds are based in high-income countries. Often those researchers form partnerships with the public sector entities they seek to influence: those can be national government departments (e.g., department of education), central government functions (e.g., office of the president), other national public bodies (e.g., central bank), or regional or local governments or municipalities. Those partnerships take many forms, varying in their resource-intensity, cost and duration.

Our research comprised:

  • Review of the literature about evidence-into-policy. This was not a systematic review, but rather we looked for documents, and insights within those documents, that are particularly relevant to the types of partnerships described. Our summary of the literature is here.
  • Interviews with both sides: with various people in research-producing organisations (universities, non-profit research houses, think tanks and others), and some of their counterparts in governments and operational organisations. Summary of insights from our interviews is here.

We also did a lot of categorising and thinking.

First, all evidence-into-policy efforts must have these three steps:

A. Decide the research question

B. Answer the question, i.e., produce research to answer it

C. Support / enable implementation: e.g., help policymakers and decision-makers to find, understand and apply the implications of the research; disseminate the findings; support implementation.

We have found this categorisation useful in various ways, including:

  • Checking that there is activity at all three stages. For example, if somebody does A and B but not C, the research is unlikely to have much effect. Equally, there are sometimes initiatives to identify unanswered or important research questions (A) but no capacity to then answer them (i.e., no proceeding to B or C).
  • Research seems to be much more likely to influence policy or practice if policymakers or practitioners (‘the demand side’) are involved at A, i.e., in specifying the problem.
  • But in much academic research, there are few/no policymakers or practitioners involved at A: the research question is decided purely based on what is ‘academically relevant’ or the academics’ interests. In that model, the researchers’ first main contact with people who might use the research is at C: and the research might be into a question which is of no interest to anybody else. We have sometimes called this approach: “here’s that report that you didn’t ask for”. It is hardly surprising if this model does not create much influence.
  • Clearly, an organisation’s choice of what it does at each stage of ABC is a way of articulating its theory of change.

Some key findings
A first comment is that we see many organisations which run interventions at a small scale (and funders who fund them). We often advise funders to support more systemic work, which, if successful, will influence much larger, existing budgets and programs. We liken this to how a little tug can direct a massive tanker. Much good philanthropy is about tugs and tankers. This project was a welcome and important opportunity to think about the relative effectiveness of various types of tug. We found that:

  • Organisations have diverse approaches / models for evidence-into-policy (i.e., theories of change) and therefore many different forms of partnerships. The organisations in the set vary considerably in their ABCs: for instance, at A, who is consulted and involved in determining the research questions, and whose priorities are involved? At B, who is involved in producing the research, e.g., what countries are they from? At C, what dissemination channels are used, and what engagement is there with potential users?
  • The most substantial partnerships that we found are between research-producers. Many of those (e.g., research networks) involve more frequent contacts than do partnerships between research-producers and policy-makers.
  • Evidence of outcomes (the benefits) is scarce and patchy. Clearly, we understand that working on change at scale and/or doing system-related work often cannot be formally measured, in the sense of doing rigorous, well-controlled studies which indicate causation. Yet there could be more routine collation of outcomes (i.e., changes in the world which can reasonably be argued to be related to the organisation’s work).
  • It is hard to be precise about the costs (inputs) of the various organisations’ EIP work.
  • Unrestricted funding makes a big and positive difference. This is mainly because opportunities to influence policy often come up with quite short windows, so flexibility is key.
  • We found considerable interest in becoming better at EIP – across funders, researchers, synthesisers, and distributors. This may be a sign of growing sophistication in the field: whereas 10 years ago the main focus was on producing research (rightly, as there was then so little), now it is more on influence and change and improving lives.

Much more detail is in the reports. We hope that they are useful to funders, research-producers, research-translators, and research-users!

Giving Evidence director Caroline Fiennes talked about these topics at the Global Evidence & Implementation Summit in Australia in 2018: video below. To watch, click on the photo and wait a second. You may need to log in – any email address is fine. Excuse the didgeridoo interruption!


The Evidence System (framed by Prof Jonathan Shepherd):


The curious relationship between the number of staff and number of trustees in foundations

UK charities, including foundations, are unusual organisations in that it is pretty common to have more trustees than staff. The trustees are non-executive directors; they are almost invariably unpaid and collectively comprise the board. It is rare in businesses and the public sector for organisations to have more non-executive directors than staff.

Giving Evidence examined 100 UK charitable grant-making foundations. Among other things, we looked at each foundation’s number of staff and its number of trustees. Amongst the 100 foundations we assessed, having more non-executives than executives is nearly twice as common as the converse:

  • 61 foundations have more trustees than staff
  • 33 foundations have more staff than trustees
  • 4 foundations have as many staff as trustees

As the graphs below show, the variation in the ratio is huge: from having ten trustees and just one staff member, to having nearly 200 staff per trustee. The latter is Wellcome, which is an outlier, but even the second-highest ratio is nearly 60 staff per trustee.

________

We gathered these data for the purposes of the Foundation Practice Rating: that is outlined below, though these particular data and analyses do not relate to the FPR’s core purpose. However, there is no existing data-set on the number of staff and trustees per foundation: to get that, you have to gather the data yourself by reading annual reports etc. That is laborious. We did it for FPR, because we needed the data for scoring foundations. So, having an unusual data-set, we thought that we would analyse it a bit and publish it.

We hope this is a useful contribution to the field 🙂

The Foundation Practice Rating is an independent assessment of UK grant-making charitable foundations. It assesses foundations’ practices on diversity, accountability and transparency, and does so using only their public materials. It is funded by 10 UK foundations, and is repeated annually. The sample of 100 foundations comprises: the 10 foundations who fund it, the UK’s five largest foundations, and the rest are selected randomly. Foundations have no choice of whether they are included. The criteria are based on precedent elsewhere, and a public consultation. They do not assess what a foundation funds, nor its effectiveness as such. Each foundation is rated A (top), B, C, or D on each of the three areas, and is also given an overall rating. 2022 was the rating’s first year: the results were released in March 2022. More information is at http://www.foundationpracticerating.org.uk

These are the foundations with the conventional arrangement of more staff than non-executives:

and these are the foundations in our sample with more non-executives than staff. Notice that there are more of them than in the set above.

[Note that four foundations are marked here as having just one trustee each. In those cases, ‘the trustee’ is an institution (and often the foundations are old, with links to the Corporation of London). For example, The Mercers’ Charitable Foundation’s one trustee is the Mercers’ Company (a City of London livery company). The Drapers’ Charitable Fund’s one trustee is The Drapers’ Company (another City of London livery company). The Resolution Trust’s one trustee is The Resolution Trust (Trustee) Limited, a company with four directors. An example from outside our sample is Bridge House Estates Trust, whose one trustee is ‘the Mayor and Commonalty and Citizens of the City of London’.]


Having Too Few Personnel Compromises Foundations’ Performance on Key Issues

A clear finding from the Foundation Practice Rating Year One research – which assessed 100 UK-based charitable grant-making foundations – is that foundations with few trustees, or few staff, tend to perform poorly on diversity, accountability and transparency. This matters because UK charities told us that those three areas are important. Foundations with no staff perform particularly poorly on these issues.

The Foundation Practice Rating is an independent assessment of UK grant-making charitable foundations. It assesses foundations’ practices on diversity, accountability and transparency, and does so using only their public materials. It is funded by 10 UK foundations, and is repeated annually. The sample of 100 foundations comprises: the 10 foundations who fund it, the UK’s five largest foundations, and the rest are selected randomly. Foundations have no choice of whether they are included. The criteria are based on precedent elsewhere, and a public consultation. They do not assess what a foundation funds, nor its effectiveness as such. Each foundation is rated A (top), B, C, or D on each of the three areas, and is also given an overall rating. 2022 was the rating’s first year: the results were released in March 2022.

Nearly two-thirds of foundations with no staff scored the lowest grade of D. By contrast, of foundations with staff, only 10% scored a D. No foundation with more than 50 staff scored a D.
There is a similar pattern in the number of trustees. No foundation with more than 10 trustees scored D, and Ds were much more common among foundations with few trustees. The graphs below show the distribution of scores by a foundation’s number of staff (left), and its number of trustees (right).

Our hypothesis is that foundations with too few people don’t have enough person-power to do the work required to have and disclose good practice in these areas. (The rating only uses publicly-available materials, so policies need to be disclosed in order to be included.) It takes people to create and publish policies, to clearly disclose investment policies, to gather and publish grantee feedback and the actions arising from it, to publish clear grant criteria, to create ways for people with disabilities to contact the foundation and apply, to maintain whistle-blower systems and complaints processes, and so on.

We appreciate some foundations’ concern about minimising their internal costs (e.g., on staff) in order to maximise the amount available for grants. But the other factors captured in the rating also matter: charities told us this in the consultation, and the rating’s criteria reflect what they value in foundations and what they want from them. Charities (and other social-purpose organisations) are surely foundations’ main stakeholder (one might call them ‘clients’ or ‘users’), so it is important that foundations listen to them. The data above suggest that having too few staff in a foundation may be a false economy: it may save money, and so increase the amount available for grants, but at the expense of being able to operate well and transparently.

In some foundations, the work is done by the staff, and the trustees (who collectively comprise the board) provide oversight and direction. In other foundations – notably those with few or no staff – the trustees may do much of ‘the work’ themselves.

Having too few staff and trustees may also inhibit effectiveness: it seems quite possible that having more staff and trustees provides a larger network from which to source ideas (about anything) as well as potential grantees; and that having too few trustees makes for inadequately diverse experience involved in the board’s decision-making, resulting in sub-optimal decisions.

That said, it is clearly possible to score (pretty) well with few staff. One of the only three foundations which scored ‘A’ overall has only five staff; and five of the foundations which scored ‘B’ overall have no staff or only one staff member. Perhaps a foundation’s culture and intention to be open / external orientation influence its practice and hence its score. Any foundation can decide to disclose the items that the rating seeks.

Giving budget per staff member

Looking at this through another lens, we examined the giving budget per staff member across the 100 foundations. This figure varied widely: from over £7m per staff member (Leverhulme Trust) to ~£30,000 per staff member (Franciscan Missionaries of the Divine Motherhood Charitable Trust). Clearly, for the 33 foundations which have no staff, we cannot calculate this number. Of course, we realise that giving models vary considerably and therefore that comparing foundations on this ratio is not a perfect like-for-like comparison.

The graph below shows the range of giving budget per staff member, and each foundation’s overall rating.

Working upwards from the bottom of the graph below, the foundations with no staff generally rated pretty poorly: of the 33 foundations with no staff, 21 scored D, nine scored C, and only three scored B.

Then let’s continue upwards to the foundations which do have staff. Amongst those foundations with a small giving budget per staff member, performance is pretty good: none of the 43 foundations with the smallest giving budget per staff member scored a D. The first D that we encounter working upwards – i.e., the foundation with the smallest giving budget per staff member which scored a D – is Yesamach Levav, which gives £930,000 per staff member.

As giving per staff member increases from there, performance tends to weaken: of the ten foundations with largest giving budget per staff member (i.e., the top ten in the graph below), five scored D, four scored C, there is only one B and no As.

This again shows that having staff who are stretched across large budgets correlates with poor practice.

Overall rating of the FPR Year One sample of 100 foundations, by giving budget per staff member:


One donor’s fantastic work to encourage use of evidence, and production of more, to fight factory farming

This article appeared in Alliance Magazine’s special edition about food systems. It shows a powerful approach to using and producing evidence which donors could use in any sector.

Moving to a sustainable and fair food system is a giant challenge, and the organisations driving it are small compared to the problems. So it is crucial that they are as effective as possible. That requires basing their work on sound evidence: about precisely where each problem is, why it arises, the relative sizes of the various problems, and what approaches work to address them.

Giving Evidence has been working on this for some years, with a US-based foundation, Tiny Beam Fund. Strikingly, its settlor, Dr Carmen Lee, is an academic librarian so knows about making information findable! Much of the work that we have done together seems to be ground-breaking but could usefully be replicated in almost any sector. I’ll here explain our various endeavours.


Carmen’s focus is tackling the negative effects (especially in low-income countries) of industrial farm animal production (IFAP) – battery hens, pigs, cattle etc. Many of the animals spend their entire lives in pens too small to even turn around, stuffed with antibiotics which get into the water system and are thus consumed by all of us, raising antibiotic resistance, ‘widely considered to be the next global pandemic‘.

Carmen says: ‘Tackling IFAP is poorly understood. It is not an established field with well-travelled paths. It is more like a mediaeval map with lots of space marked ‘here be dragons’. We don’t want to fund daredevils who just jump in to grapple with the monsters. Instead, it is hugely important for everyone to gain a deep, nuanced understanding of the complex contexts and problems. So we decided to fund the systematic acquisition of this understanding.

‘How to acquire this knowledge? Here’s where academic researchers enter the scene. Their training and skill set is well-suited to this task. Academics are also increasingly interested in studying industrial animal agriculture’s impacts in low-income countries for their own scholarly purposes.’

Identifying the ‘burning questions’

The unanswered questions are numerous, dwarfing the available research capacity to answer them. So it is essential to focus on the most important questions.

To do this, we borrowed and adapted a process developed in healthcare /medical research, by the James Lind Alliance. It elicits from patients, their carers and primary physicians (the intended beneficiaries of research) the questions that they most want researched, and has them collectively prioritise them. Obviously, we couldn’t ask the pigs, but we involved anti-IFAP campaigners, observers and researchers.


We invited under-researched questions of many types: where the intensive animal farming is, how it has arisen, who is doing what about it where, what is effective in what circumstances, ‘basic research’ about animal preferences, how the funding flows, what laws are in place and how /whether/ where they are enforced, etc. That is, the research could be ‘just’ gathering data, and/or it could be evaluating potential fixes.

The resulting list of prioritised questions is published. The questions rated highest priority are about systems (e.g., policies of the World Bank, World Trade Organisation; and the effects of free-trade negotiations) and what works against various goals, e.g., changing consumer behaviour.

We call this The Burning Questions Initiative (BQI), and it repeats every two years. Tiny Beam Fund then offers fellowships and research planning grants for academic researchers to answer those ‘burning questions’.

We have talked with other funders to encourage them to also fund research into these priority questions.

Using existing research, as well as producing new research

As a librarian, Carmen suspected that research exists which practitioners could use more / better against IFAP. But it is not always easy for practitioners to find, interpret and apply academic research: it lives behind paywalls, is written in academic-speak, rarely includes enough about how to run the intervention, etc. And academics’ incentives are to publish and teach, not to engage deeply with practitioners.

Perhaps money would help

We dreamt up a fund to help operational nonprofits to make better use of academic research and researchers. Funds might be used to hire an academic a day a week for six months to help find and interpret research relevant to strategic planning, or to run an academic advisory panel, or to buy access to academic journals. We ran a feasibility study – and about five minutes after reading our report, Tiny Beam launched such a fund. It is called the Fuelling Advocates Initiative (i.e., fuelling advocates to be better by using research to improve their effectiveness). TBF has made various grants in this programme: one example allows an NGO which is setting up a regional office in Asia to use academic experts to help define its scope of work in Malaysia, Indonesia and Thailand.

The dragons

There are structural impediments to collaborations between practitioners and researchers. Carmen reports learning ‘that getting practitioners /advocacy NGOs and academic researchers to work together on factory farming issues is not simple, especially in LMICs. Researchers and scientists in these countries often shy from NGOs /practitioners involved with advocacy and activism. When NGOs ask researchers to help answer urgent questions, or when researchers hear that their research will support organisations concerned with factory farming, many of them say ‘no, sorry, I can’t help’ – even though the research would be purely scientific (e.g. collecting and analysing data about the use of antibiotics in livestock farms). One reason is that the researchers or their universities receive considerable funding from the animal agriculture industry. Other reasons include that academics find working with NGOs to be frustrating because NGOs don’t know how to communicate with academics (who operate in a different universe); and very different time-frames (for an academic, a year is not long, but NGOs think that’s ages). That said, some academics are willing to help, and some NGOs engage well with academics. One should be mindful of these issues, and should not be surprised to hit brick walls.’

As mentioned, almost any sector could use these methods to increase the quality and use of evidence, and thereby make practitioners more effective.


Reducing the Administrative Burden Placed on UK Charities by UK Donors and Funders

Giving Evidence is delighted to be studying funders’ application processes – to try to figure out how to reduce the costs that funders create for operational nonprofits. This is a hugely important topic, so we have written about it publicly and have been seeking for a while to work on it. We have now teamed up with the Law Family Commission on Civil Society, run by Pro Bono Economics, which exists to ‘unleash the full potential of civil society’: a considerable ‘leash’ (constraint) on civil society organisations is the costs they bear from charitable funders through application and reporting processes.

What is the issue here?

Charities and civil society organisations (CSOs) spend masses of time (=money) applying to funders. If they do not get the funding, most of that cost is wasted: specifically, it reduces the amount of work and good that they can do with their available resources. So we can think of it in terms of the efficiency of the process (we mean ‘efficiency’ in the mechanical, engineering-type sense, i.e., the amount of output achieved for a given amount of input, vs the amount that is wasted.) In economics-speak, application costs raise the cost of capital for CSOs.

Application processes are created by funders. Some of the costs are borne by them (e.g., their staff time reading the forms) but other costs fall on non-profits: they are ‘externalised’ by the funders, so are invisible to the funders, and rarely actively managed by them. (There are some honourable exceptions: BBC Children in Need is one.) Hence, in a bad case, it can happen that a funder’s process creates so much work for other organisations that its costs exceed the amount being given – without the funder even realising. We have seen instances of this.

Most funders have their own application forms and processes. That increases work and wastage. And some funders invite way more applications than they need. So we seek to reduce this wastage. (Some analysis here by Time to Spare of the scale of that wastage. It thinks that 46% of UK grants cost more than they’re worth.)

Johann Sebastian Bach:

Oh, that’s my application to the court. If I get it, I’ll have spent all of it on that very application.

From Bach and Sons, a play by Nina Raine[1]

But it’s not trivial, for various reasons. First, some application processes are helpful to applicants, even if they do not get the funding. Second, the application process may serve some useful purpose for the funder, such that ditching it would be an error. And third, we are – always! – alive to the possibility of unintended consequences.

Now is a particularly good time to work on this topic because many funders changed their practices in response to the pandemic, becoming faster and leaner – so may be open to embedding new practices. For instance, 67 funders in London collectively created one-stop shop application processes with a slimmed-down application form, and funders in Jersey also collectively created a new collaborative process.

Our approach

Is to treat this as a behaviour change exercise. That is, we are not looking simply to document these costs, but rather to understand why and where they arise, the benefits to the various players as well as costs, and to identify the likely effects of approaches which might reduce them.

We seek to really understand the likely effects of possible fixes, as well as their likely take-up. This seems to be an innovation in discussions and work on this issue.

What we will do

Clearly we are learning from existing work on this issue and seeking to augment it. So our workstreams are:

Understanding what the current behaviours are and why they arise. We have interviewed foundations – and crucially also some operational nonprofits. We seek to understand what funders are trying to achieve with these processes: e.g., identifying the strongest applications / organisations / approaches, limiting work for their own teams, or limiting applications in order to reduce costs to applicants. We have investigated how operational charities decide whether it is worth applying to a particular funder – the extent to which they take into account the costs of applying and the chances of success (the ‘expected value’ of their application). As far as possible, in these interviews, we have looked at other aspects of funding practice, such as restrictions and grant duration.

For example, here is the Hewlett Foundation describing how it designed its processes to minimise burden / maximise effectiveness of its grantees. (Yet another reason that I have a massive crush on the Hewlett Foundation…)
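To illustrate the ‘expected value’ logic mentioned above, here is a minimal sketch with invented figures; it is not a description of how any charity we interviewed actually decides whether to apply.

```python
def expected_net_value(grant_size, probability_of_success, cost_of_applying):
    """Expected value of submitting one application: the grant weighted by the
    chance of winning it, minus the cost of preparing the bid (incurred either way)."""
    return probability_of_success * grant_size - cost_of_applying

# Invented example: a £20,000 grant with a 1-in-8 chance of success,
# where preparing the bid costs £1,500 of staff time.
print(expected_net_value(20_000, 1 / 8, 1_500))   # 2500 - 1500 = 1000.0 -> worth applying
# With a 1-in-20 chance, the same bid has negative expected value:
print(expected_net_value(20_000, 1 / 20, 1_500))  # 1000 - 1500 = -500.0
```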

Discussing possible fixes with foundations and experts. We have interviewed foundations and sector experts about potential fixes, whether new or already attempted. We are aware of, for example, the #FixTheForm initiative; a study a while ago by NPC called Turning the Tables; a study by the University of Bath; the feedback about funders being gathered by GrantAdvisor; Project Streamline in the US; and various attempts at shared applications and shared reporting. The goal is to gain insights into the dynamics that they have encountered, and their views on drawbacks and feasibility of various proposed fixes.

Economic modelling: Understanding the likely effects of potential fixes. What people say they will do and what they actually do are often different! As well as listening to foundations and charities, we are doing some economic modelling, to identify the behavioural changes that can realistically be expected, the scale of savings that potential fixes might have, and to whom – and also to uncover unintended consequences (including adverse effects) which might arise from changes to the system. This approach is different from much of the research in the charity sector: we hope that it will bring additional insight.

If you have worked on this issue before – in any country – please get in touch! We would love to hear from you.


[1] This play is new and has only been staged; the script is not yet published, so I may have misremembered this quote a bit.
