What difference does it make?

Posted on 8 October 2020

Jonathan Haslam, Director, IEE

In a recent book chapter, I wrote that “if you implemented the currently available best evidence across all aspects of school practice, you would see an average improvement of +0.30, maybe +0.40 if you’re lucky. This is well worth having, but it doesn’t, for example, quite close the achievement gap between disadvantaged students and their peers.” I think this raises important questions about what we can realistically expect evidence-based education, or indeed education in general, to achieve. I’m not sure that conversations around these expectations are always realistic; instead, they imagine some idyllic future in which schools can solve these problems all on their own.

In this post I want to present this argument visually, to try to give a clearer picture of what I mean.

For some reason, presenting data as a normal distribution has gone out of fashion, but to me it makes a lot of sense.

In 2015, the Department for Education used this illustration to show both that there is a gap between disadvantaged pupils and their peers, and that the two groups are intermingled.

DfE illustration: The distribution of pupil attainment, disadvantaged pupils and others, England, 2015 (state-funded schools)

But I don’t think this is terribly helpful, and I don’t know whether it is possible to convert their “mean rank” difference into a standard deviation. So what is the difference, as a standard deviation/effect size, between advantaged and disadvantaged students? In the US, this article suggests it is between 0.8 and 1 standard deviation. This article from 2012 suggests that in the UK the difference between FSM pupils (children receiving free school meals) and non-FSM pupils is 0.63. This 2019 article by Stephen Gorard puts the figure, I think, between 0.68 and 0.78, though again I may not have understood this correctly.

Part of the problem, of course, is the definition of disadvantage. Is it always FSM, ever-FSM, or some other measure? “Disadvantage” isn’t a binary measure, and the differences between some disadvantaged and non-disadvantaged students might be small. But let’s not complicate things even more.

So, I’ve settled on 0.75 standard deviations between FSM and non-FSM and assumed that, on average, 14% of students are FSM.

I’ve added a line for the number of students who would reach the expected standard in reading, writing, and maths. Why? Well, I want to get a picture of how much meaningful difference interventions are making. This might be an arbitrary target, but it’s meaningful in the sense that the more students who reach it, the more can go on to access the curriculum in the future and then, basically, cope with life – budgeting, understanding paperwork, and so on.

Here, then, is the picture for 100 students:

So, as in the DfE’s image, it shows a spread of FSM students, usually behind their peers, but by no means all “failing”. Of the 14 pupils receiving Free School Meals, 8 will not reach the expected standard in reading, writing, and maths. Of the 86 pupils not receiving Free School Meals, 28 will not reach the expected standard.
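For anyone who wants to poke at these numbers, here is a rough sketch of the sort of model behind the picture. It assumes attainment in each group is normally distributed with the same spread, with FSM pupils sitting 0.75 standard deviations below their peers on average, and it places the “expected standard” threshold so that 36 of the 100 pupils fall below it overall. Those modelling choices are mine, made to roughly reproduce the chart, not something taken from the DfE data.

```python
# A back-of-the-envelope version of the two-group picture above.
# Assumptions (mine, not the DfE's): attainment in each group is normal with
# SD = 1, FSM pupils average 0.75 SD below non-FSM pupils, 14 of every 100
# pupils are FSM, and the "expected standard" threshold is placed so that
# 36 of the 100 pupils fall below it overall.
from scipy.optimize import brentq
from scipy.stats import norm

N_FSM, N_NON_FSM = 14, 86      # pupils per 100
GAP = 0.75                     # FSM / non-FSM gap in standard deviations
MISS_OVERALL = 36 / 100        # share of all pupils below the expected standard


def mixture_below(t: float) -> float:
    """Share of the whole cohort scoring below threshold t (non-FSM ~ N(0,1), FSM ~ N(-GAP,1))."""
    return (N_FSM * norm.cdf(t + GAP) + N_NON_FSM * norm.cdf(t)) / (N_FSM + N_NON_FSM)


threshold = brentq(lambda t: mixture_below(t) - MISS_OVERALL, -3, 3)

fsm_miss = N_FSM * norm.cdf(threshold + GAP)
non_fsm_miss = N_NON_FSM * norm.cdf(threshold)

print(f"threshold: {threshold:+.2f} SD")
print(f"FSM pupils missing the standard:     {fsm_miss:.1f} of {N_FSM}")
print(f"non-FSM pupils missing the standard: {non_fsm_miss:.1f} of {N_NON_FSM}")
# Prints roughly 8.5 of 14 and 27.4 of 86 - close to the 8 and 28 above.
```

The exact counts depend on where you put the threshold, but the shape of the picture – most FSM pupils behind their peers, with plenty of overlap – is much the same for any sensible choice.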

So what impact might we expect to see on this profile from implementing a new approach? The Education Endowment Foundation (EEF) has now trialled more than 100 different approaches. None of these approaches were “bad bets” when the EEF first approved them for a trial. It’s not easy to get an approach trialled by the EEF, so these ideas were all educationally plausible, and just as likely to be chosen by schools. The average impact found in an EEF trial is an effect size of +0.06. What difference would this make to our average sample of 100 children? As shown below, it would make no difference to the number of FSM children reaching the expected standard, but a couple more non-FSM children would now make it. In an average class, maybe one extra non-FSM child would now reach the standard.

But these are the results of all the interventions that the EEF has trialled, and some obviously had better results than others. What if we just used the best? Well, arguably the best approach tested at a large scale was the trial of Tutor Trust, which had an effect size of +0.25. What does that look like?

For our 100 students, this would mean that one more FSM student would now reach the expected standard, and six more non-FSM students. Or, in an average class, a couple of non-FSM students and possibly one FSM student.
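To make the comparison concrete, here is the same toy model again (with the same assumptions as the sketch above – the 0.75 SD gap, the 14% FSM share and the threshold placement are mine, not the EEF’s), this time shifting each group up by an effect size and re-counting who clears the threshold:

```python
# Same toy model as before: shift everyone's attainment up by an effect size
# (in SD units) and re-count who clears the expected-standard threshold.
# The 0.75 SD gap, the 14% FSM share and the threshold placement are my own
# assumptions, chosen only to approximate the picture in the post.
from scipy.optimize import brentq
from scipy.stats import norm

N_FSM, N_NON_FSM, GAP = 14, 86, 0.75


def find_threshold(miss_overall: float) -> float:
    """Threshold below which the given share of the whole cohort falls."""
    def mix(t: float) -> float:
        return (N_FSM * norm.cdf(t + GAP) + N_NON_FSM * norm.cdf(t)) / (N_FSM + N_NON_FSM)
    return brentq(lambda t: mix(t) - miss_overall, -3, 3)


def reaching_standard(effect_size: float, threshold: float) -> tuple[float, float]:
    """(FSM, non-FSM) pupils reaching the standard after a uniform effect-size boost."""
    fsm = N_FSM * (1 - norm.cdf(threshold + GAP - effect_size))
    non_fsm = N_NON_FSM * (1 - norm.cdf(threshold - effect_size))
    return fsm, non_fsm


t = find_threshold(36 / 100)          # baseline: 36 of 100 miss the standard
for d in (0.00, 0.06, 0.25):          # baseline, EEF trial average, Tutor Trust
    fsm, non_fsm = reaching_standard(d, t)
    print(f"effect size +{d:.2f}: {fsm:4.1f} FSM and {non_fsm:4.1f} non-FSM pupils reach the standard")
# Roughly: +0.06 adds ~0 FSM and ~2 non-FSM pupils; +0.25 adds ~1 FSM and
# ~7 non-FSM pupils - in the same ballpark as the figures quoted above.
```

The +0.25 line comes out at about seven extra non-FSM pupils rather than six, which is just the rounding in this crude model; the point is the order of magnitude, not the precise count.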

In fact, the Tutor Trust picture would probably look more like this, since only students with lower prior attainment would receive the tutoring. The improvement now does see the lower-attaining pupils somewhat closing the gap on their peers.

However, as mentioned at the outset, even the best approach doesn’t close the achievement gap. I think this means that we need to have hard conversations about what schools can be expected to achieve.

To persist with the myth that schools can overcome the impact of poverty is unhelpful. It saddles schools with a responsibility that they shouldn’t have (and, similarly, with the blame for failure when they don’t close the gap). It also means that many, many children are still “failed” by a system that’s waiting for a golden tomorrow when their disadvantage can be educated away.

At the same time, we still want to be in a situation where teachers and schools are using the “best” possible evidence-based approaches, so that as many children as possible “succeed”. I’m not advocating that schools shirk their responsibility to do their best for under-achieving students (whatever their background).

How can we change the narrative to get to a more realistic approach, one that supports schools to do their best, but accepts that if we’re serious about fixing the impact of poverty, something else is necessary?
