Figuring it out: there is no evidence that the most effective charities really do have high admin costs

Back in May 2013, some data was published by Giving Evidence in the UK and GiveWell from the States, which they claimed showed that the most effective charities had high levels of admin costs. In December, the Guardian Voluntary Sector Network said it was one of the most-read blogs of the year and Giving Evidence headlined a piece in their newsletter ‘Good charities have admin costs’. In their original press release they said:

“It’s unarguably wrong of donors such as Gina Miller to suggest that admin costs be capped. The data indicate that such caps would nudge donors towards choosing weaker charities, at untold cost to their beneficiaries.”

The debate on admin costs is important. Our research shows that donors are very keen to make sure their money is well spent and don’t like admin costs. Equally, we know that many within the sector don’t like the fact that charities are often judged by the size of their admin costs, a sentiment we would share. However, it is important that we base our case against using admin costs on solid evidence. So with a bit of spare time over the Xmas break, we thought we would examine the data behind the claim that effective charities spend more on admin. This is what we found.

The data for the original study came from charities that GiveWell has analysed in the US. The argument is that the charities that GiveWell ranks highly (i.e. the 11.5% group below) have a higher level of admin costs than those they said were not so good or didn’t rank (i.e. the 10.8% group below).

In a nutshell, we found:

  • There is no statistically significant evidence that high-performing charities have higher admin costs – not even close
  • This is because the sample size is far too small (only six in one group) and the differences in admin levels are too small
  • The charities studied are from the US and mostly work in international development
  • Admin costs have no definition in UK charity accounting – they are the amount that remains when everything else is accounted for
  • Even if the data were valid, we wouldn’t know whether high admin levels are the cause of effectiveness or the effect of something else, such as an effective CEO

Now in more detail…

The number of charities in the sample is simply too small

To recap, the headline said that effective charities spent 11.5% on admin, whereas ineffective charities spent 10.8% on admin for 2010-11. There were only six (yes, just six) charities whose data was included in the 11.5% figure and just over 20 for the 10.8% figure. This isn’t anywhere close to being statistically significant (the data has a p-value of 0.89, for those of you who like those things).
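To make the statistics concrete, here is a minimal sketch of the sort of test involved (Welch's two-sample t-test, via scipy). The admin-cost percentages below are invented purely to mirror the published group sizes and means; they are not the actual figures.

```python
# A minimal sketch of the sort of test involved (Welch's two-sample t-test).
# The percentages below are INVENTED to mirror the published group sizes and
# means (n=6 averaging 11.5%, n=20 averaging 10.8%); they are not the real data.
from scipy import stats

high_performers = [2, 5, 9, 12, 19, 22]               # n = 6, mean 11.5%
the_rest = [3, 4, 6, 7, 8, 9, 9, 10, 10, 10,
            11, 12, 12, 13, 13, 14, 15, 16, 17, 17]   # n = 20, mean 10.8%

# equal_var=False gives Welch's t-test, which doesn't assume equal spread
t_stat, p_value = stats.ttest_ind(high_performers, the_rest, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")
# With groups this small and this spread out, p lands far above 0.05: the
# 0.7-point gap in the means is entirely consistent with chance.
```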

Equally important, within the groups there was a huge range of values. For example, the lowest level of admin costs in the ‘high-performing’ 11.5% group was 2% and the highest 22% (so how a 2% figure fits with the argument that ‘good charities have admin costs’ we really don’t understand).

There was another sample group from 2008-9 which had slightly different headlines. Here, the high-performing group had admin levels of 10.2% and the lower-performing group had admin levels of 9.4%. The sample sizes still aren’t large: the high-performing group had 37 organisations in it. So, better than the 2010-11 group, but still not strong enough to call the case “unarguable”. One analysis on the Freakonomics website, in support of not using ‘overheads’ as a way of judging charities, found no statistical significance in these numbers (more on this below).

One brain-scratching conundrum that eagle-eyed numbers geeks may have spotted is that the high performers in 2008-9 had a lower admin level (10.2%) than low performers in 2010-11 (10.8%). So, is it the relative level of admin that is alleged to make the difference or the absolute level? If it’s the latter, then 2010-11’s low performers would have been just fine in 2008-9. Or they would have ruined the arguments that the authors wanted to make.

Here is the authors’ helpful further breakdown of the admin levels for 2008-9:

                    Average % spent on admin (2008)    n=
Gold                16.00                                2
Silver              11.48                                4
Notable              9.73                               32
Not ranked well      9.47                              200

So if we remove the six gold and silver organisations from the 2008-9 data, the difference between the high performers and the low performers is just 0.25 percentage points. Is this difference statistically significant? No, it isn’t. Whether measured with or without the gold and silver performers, there is no meaningful difference between the high performers and the rest in terms of admin costs.
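As a quick back-of-envelope check, here is a sketch using only the group means and sizes published in the table above (the authors' unrounded data may differ slightly in the second decimal place, which would explain the 0.25 figure):

```python
# Group means (%) and sizes, read straight off the 2008-9 table above.
groups = {
    "Gold": (16.00, 2),
    "Silver": (11.48, 4),
    "Notable": (9.73, 32),
    "Not ranked well": (9.47, 200),
}

# Weighted mean admin level across the ranked groups (Gold + Silver + Notable):
ranked = ["Gold", "Silver", "Notable"]
total_n = sum(groups[g][1] for g in ranked)
weighted_mean = sum(groups[g][0] * groups[g][1] for g in ranked) / total_n
print(f"Ranked charities, weighted mean: {weighted_mean:.1f}%")  # 10.2%, matching the text

# Drop Gold and Silver: what remains is Notable vs Not ranked well.
gap = groups["Notable"][0] - groups["Not ranked well"][0]
print(f"Gap without Gold/Silver: {gap:.2f} percentage points")  # ~0.26pp on the rounded means
```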

The sample for the study is largely international charities; the rest are US-based

The charities in the sample are three-quarters ‘international’. The rest are ‘United States’. It’s hard to see how a sample like this is relevant or applicable to medical, disability, social welfare or environmental charities, even if the basic data was valid.

The point about studying US charities (international or other) is that we don’t know whether US financial accounting can cross the Atlantic. One reason to be sceptical about US data is that in the UK, there is no definition of admin under the Charity Commission’s SORP. Admin is generally classified as the expenditure that is left when everything else is taken out (mostly charitable activities and fundraising). So, different charities will treat the cost of buildings, evaluation costs, secretarial costs and the like very differently, meaning it’s very hard to compare admin costs even within the UK. Even if data like this is relevant in the States, it’s hard to see how it has much value this side of the Atlantic.

More generally, for a study like this to be useful, the sample needs a clear rationale – say by income (e.g. 10 organisations in each of 10 income bands) or from 10 different fields of work. At the moment, it’s very hard for any other organisation to replicate the analysis. At the very least, a key thing we need to know is how those organisations analysed by GiveWell are chosen. Are they chosen at random, do they pay to be analysed, or does GiveWell perhaps analyse those that funders want analysed? This approach to picking a sample would never withstand scrutiny in any medical trial.
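For illustration only, a sampling frame with the kind of rationale we mean might be built like the sketch below. Everything in it – the charity list, the income bands, the helper function – is hypothetical; it shows the idea, not anyone's actual method.

```python
# Hypothetical sketch of a defensible sampling frame: draw the same number of
# charities at random from each income band, so the sample has a clear rationale.
import random
from collections import defaultdict

def stratified_sample(charities, bands, per_band=10, seed=1):
    """charities: list of (name, annual_income) pairs.
    bands: list of (low, high) income bounds.
    Returns up to per_band randomly chosen names for each band."""
    rng = random.Random(seed)          # fixed seed so the draw can be replicated
    strata = defaultdict(list)
    for name, income in charities:
        for band in bands:
            if band[0] <= income < band[1]:
                strata[band].append(name)
                break
    return {band: rng.sample(names, min(per_band, len(names)))
            for band, names in strata.items()}
```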

What do other analysts say about this data?

A 2011 article by Dean Karlan on the website Freakonomics entitled ‘Why ranking charities by administration expenses is a bad idea’ (a sentiment with which we wholly agree) analyses the GiveWell data. His analysis shows that there is no difference in the admin costs of the high and low performers, with a p-value of 0.35 (in stats speak, this figure would need to be 0.05 or below to count as statistically significant). Remember, this is the same data that Giving Evidence says makes the case “unarguable”. Even more interesting is that his analysis also covers fundraising costs, which he says show a statistically significant difference between high and low performers. Confused as to how the same set of data can lead to such a different conclusion? Us too.

Where are the fundraising costs?

So this leads us to another point. Adding together the programme costs and the admin costs for both high and low performing charities gives around 92-93% of total income. This implies that fundraising costs are around 7% (though we have no evidence for that, other than the income that is left unaccounted for). If fundraising costs are around 7%, then that is low compared to the UK, where they might be more typically around 20% or higher. So where are the fundraising costs in this data?
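The inference here is simple residual arithmetic, sketched below. Only the roughly 92-93% combined figure comes from the data; the 81.5% programme share is an illustrative assumption of ours.

```python
# Residual arithmetic made explicit. Only the ~92-93% combined figure comes
# from the data; the 81.5% programme share below is an illustrative assumption.
programme_pct = 81.5   # hypothetical programme share
admin_pct = 11.5       # the 'high performers' admin figure quoted in the study
residual_pct = 100 - (programme_pct + admin_pct)
print(f"Unaccounted-for share (implied ceiling on fundraising): {residual_pct:.1f}%")  # 7.0%
```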

If better charities had high admin costs – would it be cause or effect?

Let’s suppose for a moment that the data is accurate and the admin difference is statistically significant. The question would then be whether the difference in admin costs is the cause of greater effectiveness, or whether high admin costs and effectiveness are both the results of another factor, such as an energetic, focused and strategic CEO. Just because there is a link between admin costs and effectiveness doesn’t automatically mean that high admin costs cause effectiveness; both could be driven by something else altogether. Put another way, any charity which increased its admin costs in the sole hope of being more effective would be sorely disappointed.

What’s our conclusion?

We believe that there is no evidence that stands up to scrutiny about the level of admin costs and the performance of charities. However, the basic premise that the level of admin costs is a poor way to judge charities is one we would absolutely agree with. One problem is that a charity’s admin costs may tell you more about the skill and financial wizardry of a finance director than anything else.

All our research with the public suggests that they do care deeply about levels of admin costs because they see them as synonymous with waste. Our real challenge then is to give donors an alternative to admin costs as a way of judging charities. In this sense, what GiveWell are doing in the States with assessing charities is welcome. However, that does not justify using those assessments to make exaggerated claims about admin levels based on weak or inconclusive data.

Oh, and remember Gina – it’s a free world. You’re a donor. You can argue what you feel. It’s the job of charities to change your mind and, if necessary, find some hard data to prove our case.

Joe Saxton and Michele Madden
 

A nod of the head? A furrowing of the brow? Leave us a comment below.

Submitted by Caroline Fiennes (not verified) on 23 Jan 2014

Caroline Fiennes here: I published this research.

First, we welcome a debate about the quality of research. A good deal of research published in this sector is garbage (both about the sector and about 'impact'). We are very interested in - and hence soon to publish about - research quality.

A few points of fact:
- the work was published by Giving Evidence, not as stated here by Giving Evidence & GiveWell.
- It's Dean Karlan, not Kaplan.
- the sample is indeed small. We make no attempt to hide this, as witnessed by the fact that we published all the data. However, as we said publicly at the time and have repeated since, it's bigger than zero, which is the modal sample size in these discussions. Indeed ours were the first numbers in this debate AT ALL.
- the reason the sample size is small is obvious from the method used: we took all the datapoints available in GiveWell's analysis.
- on the '2%', the data for admin costs of 'high', 'medium' performing charities are AVERAGES, as stated. Clearly then some points will be below the average and some will be above.
- You say: 'a key thing we need to know is how those organisations analysed by GiveWell are chosen'. GiveWell published details of that, here: http://www.givewell.org/international/process

A few other comments:
The way to solve a small sample size is to get a bigger one, not to bitch about the small one. If nfpSynergy, or anybody else, has a bigger one, we will be the first to be interested in it and applaud it. Similarly, you say that the data are mainly for US charities and for int'l development charities. That is because of GiveWell's process. If you or anybody else did similar analysis of charities in other sectors or countries, we'd all be interested to see it.

And finally, of course we can only have this discussion because Giving Evidence published our method and the raw data in full. We look forward to a world in which all analysis and data about charities, funders and results are published in full.

Submitted by Simon Bernstein (not verified) on 24 Jan 2014

Great detective work Joe and Michele! I thought there was a funny smell about that piece of research ... a little like garbage in fact.

Submitted by Simon McGrath (not verified) on 24 Jan 2014

Thanks for highlighting such an important issue: charity admin costs and the need to robustly analyse any claimed link with effectiveness.

Giving Evidence deserves credit for publishing the original data. But I am surprised that Caroline Fiennes both lambasts research standards in general and then says "The way to solve a small sample size is to get a bigger one, not to bitch about the small one." A bigger sample size is a good idea – there is nothing wrong with a small sample size so long as the limitations are pointed out. In this case the limitations are that, statistically, the differences in admin costs between the best and the rest are meaningless. Unfortunately, this means the conclusion drawn by Giving Evidence – that higher admin costs are associated with better performance – is not valid. That is a pretty important point to brush under the carpet. Until we have bigger sample sizes we are in the dark on the true relationship between admin costs and charity performance.

Could I ask Caroline to explain what her comment on the sample size – that "it's bigger than zero, which is the modal sample size in these discussions" – means in plain English? I'm afraid it leaves me baffled.

Disclosure: I was contacted by Joe Saxton for my views on the original data before he published the blog. I checked that the admin cost differences in the sample-of-six data were statistically meaningless (Freakonomics had already shown the same was true for the larger sample).

Submitted by John Brady (not verified) on 28 Jan 2014

I think it's great – not geeky – as well as refreshing to have a debate around data and statistics, especially as the topic of 'admin' costs is a hot potato. It's a debate that Radio 4's More or Less would be proud of. http://www.bbc.co.uk/programmes/b006qshd

My comments are as an outsider with no axe to grind for either protagonist. However, even without drilling down into the sample size arguments etc., such a small difference in admin costs – 11.5% vs 10.8% for 2010-11 – does not justify such a headline-grabber as 'the most effective charities spend more on admin'.

Maybe it's a personal hobby horse, but it does jar when media headlines – whether mainstream or charity sector – scream things such as "most people..." when in fact it is 51% versus 49%.

The report does stimulate debate, which is good. I thought the nfpSynergy critique was reasonable and not bitching. More power to everyone's elbow.

Submitted by Caroline Fiennes (not verified) on 28 Jan 2014

To answer Simon’s question: the mode is a type of average. It’s the value which occurs most frequently. For example, the modal ethnicity in France is French. So ‘zero is the modal sample size in these discussions’ means that most discussions have a sample of zero, i.e., are completely data-free. Ours were the first data at all in this debate, to our knowledge.

On John’s point about More or Less, well, funnily enough that’s presented by Tim Harford, who wrote in the FT about Dean’s original analysis (“this ready reckoner [of admin costs] is enormously misleading”). He then gives various examples of why admin costs are misleading. He also wrote there about my book, which gives numerous other lines of argument and examples showing the fallacy of using admin costs to judge effectiveness. Amazon lets you read the chapter on that, Chapter 2, for free, here:
http://www.amazon.co.uk/Aint-What-You-Give-That/dp/0957163304/ref=sr_1_…

Tim’s piece: http://www.ft.com/cms/s/2/3b1ef29e-6d74-11e1-b6ff-00144feab49a.html#axz…

Submitted by John Brady (not verified) on 28 Jan 2014

Thanks for those links, Caroline. I will check them out, particularly the book chapter. Tim Harford is a really good presenter, particularly at engaging the public on data issues and topics.

And whilst I may not think there is statistical significance in the 11.5% v 10.8%, I do agree that it is a fallacy to equate effectiveness – that is, delivering on mission to beneficiaries and making a transformational impact on their lives – with admin costs.

Submitted by Simon (not verified) on 28 Jan 2014

Sorry, Caroline, I was a little slow to get your joke.

RE"Ours were, the first data at all in this debate, to our knowledge."

Yes, and I applaud what you have done and hope this will be developed to the point that the data is able to support robust conclusions. The danger with a little data is that they can give a false impression of hard evidence. Which is why statistics, though dull, are so important in these discussions.

Submitted by Vincent Murphy (not verified) on 29 Jan 2014

There are numerous small charities about run entirely by volunteers, with no admin costs. I have no idea as to the breakdown between effective and not-so-effective charities in this category, but for sure the difference has nothing to do with admin cost differences. In any business – and charities are also businesses – the level of administration is a function both of the needs of the business and the efficiency of management. It's possible for a badly run business to be effective in serving its clients or customers. However, in the commercial sector a financially badly run business will sooner or later fail despite good service to the public. In a charity there is no such imperative: as long as they can raise funds they stay in business. They can develop inefficient ways and high admin costs but still be effective in performing their charitable functions. I therefore think any attempt to link admin costs to effectiveness will fail. This is not to say that admin costs are unimportant. On the contrary, they are very important because they determine the proportion of a donation that goes to the intended beneficiary. So two things are required:

1. Some standardised measurement of admin costs which can be used in comparisons, and
2. Benchmark levels of admin costs which can be used to assist managers to drive efficiencies and donors to evaluate management effectiveness.

None of this is to suggest that statistical surveys and comparisons are not important: they are, but there has to be clarity about methodology and objectives.

Submitted by Miles Witham (not verified) on 31 Jan 2014

Am pleased to see a healthy debate around this. As usual with data, nothing is perfect, but one has to start somewhere, and it is usually only after getting an analysis out that we are able to identify how to improve it and build on it.

I would agree that the most important next thing is to standardise the definition of admin costs – without this it will be very difficult to progress. The flipside, of course, is defining an 'effective charity' – it strikes me that this is a harder thing to define, especially across sectors (an example: Cancer Research UK are a failed charity, as more people are dying of cancer now than at any time in history – discuss!)

My final point on this particular blog piece is the comment about causality. I agree that an association between admin spend and effectiveness doesn't necessarily mean that increasing spend will increase effectiveness: association isn't causation. However, it is possible that it might be a causal association, and so we should not be quick to dismiss this. What is needed here is an experiment: take a group of charities, examine year-on-year trends in 'effectiveness', see when they implement a policy change to significantly and rapidly increase their admin spend, and then track whether their effectiveness increases over the next few years compared to those without an increase in admin spend. We are probably not going to get anyone to do an RCT, so this type of evidence is probably the best that we can hope for.
