It’s complicated

Posted on 8 March 2019

Jonathan Haslam, Director, IEE

Griphook: How did you come upon that sword?

Harry Potter: It’s complicated. Why did Bellatrix Lestrange think it should be in her vault?

Griphook: It’s complicated.

Last week saw the publication of the latest four reports from projects that received IEE Innovation Evaluation Grants.

Part of the Research Schools Network project, these pilot evaluations of innovations in teaching and learning approaches support the network’s goal of improving the attainment of pupils by increasing the use of evidence-based practices.

The story that emerges from these reports is not that a particular intervention worked or didn’t work. For one thing, these projects are all fairly small scale, and it would be wrong to draw definitive conclusions.

But perhaps more interestingly, these latest reports have shown that, even across a small number of subjects, the same approach doesn’t appear to have the same impact.

It can be tempting to apply an approach class-wide, department-wide, school-wide, region-wide and ultimately nationwide. Yet most of these latest reports show how the impact can vary in different situations:

  • In Audio Feedback, the approach was effective in both sociology and maths. Sociology teachers and students preferred using audio feedback, but maths teachers and students didn’t. Clearly this would influence any further scaling up of the approach.
  • In the Desk Cycle study, the desk cycles helped reduce levels of hyperactivity for pupils who had previously shown high levels of hyperactivity. But there was evidence that for students who had medium or low levels of hyperactivity, using the desk cycles led to a negative impact on maths attainment. This suggests that it would be unwise to introduce desk cycles for everyone.
  • And in Improving reading fluency, the intervention (designed for Year 6) had less impact in Year 4. Not surprising, perhaps, but how often have interventions been applied wholesale to groups of pupils they were not originally designed for? Lab research is often carried out on undergraduates, but the results then uncritically applied to younger students, for example.

All of this illustrates a need for caution when applying the learning from research indiscriminately. Or, indeed, applying any approach universally. Implementation requires adaptation to local circumstances, but how do we avoid this being an excuse for doing the same old thing that we have always done? Teachers and schools, beleaguered by the introduction of new policy after new policy, are adept at jumping through hoops, demonstrating superficial compliance, or repackaging a new thing as an old thing they have always done. So how do we know that we’ve improved?

Thankfully, these innovation evaluation projects point at a possible answer. Using the structure of an evaluation helped to make the introduction of a new approach more robust. I’d argue that the schools in these projects now have a better idea of whether or not (and how) these projects “worked”. The steps that helped them to identify this included:

  • Being clear about the “problem”. What was the precise issue that needed a solution?
  • Being clear about why the approach might help. Why would this approach work for these children, and how? What, exactly, is the approach?
  • Having a clear comparison group.
  • Objective pre- and post-intervention measures that are fair to both groups and set and marked as “blind” as possible (i.e., by someone who doesn’t know who has, or hasn’t, received the new approach); a simple illustration of this kind of comparison follows this list.
  • Monitoring the way that the approach is implemented (was it as intended, and what did staff and pupils think of it?).
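
By way of illustration only, here is a minimal sketch, in Python, of the kind of before-and-after comparison that a clear comparison group and pre- and post-intervention measures make possible. The pupils, scores and group sizes are entirely invented, and a real evaluation would use larger samples and appropriate statistical tests; the point is simply that this structure lets you compare measured gains rather than impressions.

    # A minimal, purely illustrative sketch: comparing mean gains between an
    # intervention group and a comparison group. All scores below are invented.

    from statistics import mean

    # Hypothetical pre- and post-intervention test scores (out of 50) per pupil.
    intervention = {"pre": [28, 31, 25, 34, 30], "post": [35, 36, 29, 40, 34]}
    comparison = {"pre": [29, 30, 26, 33, 31], "post": [31, 32, 28, 35, 33]}

    def mean_gain(group):
        """Average improvement from pre-test to post-test for one group."""
        return mean(post - pre for pre, post in zip(group["pre"], group["post"]))

    gain_new_approach = mean_gain(intervention)
    gain_usual_practice = mean_gain(comparison)

    print(f"Mean gain, intervention group: {gain_new_approach:.1f}")
    print(f"Mean gain, comparison group:   {gain_usual_practice:.1f}")
    print(f"Difference in gains:           {gain_new_approach - gain_usual_practice:.1f}")

Even a back-of-the-envelope comparison like this, done with blind marking and a genuine comparison group, tells a school far more than “the staff liked it”.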

The ways that innovation, evaluation and improvement intersect is something that we will be exploring further in the coming months. We will be publishing more reports throughout the year, and if you would like to be notified when these become available you can sign up to receive email alerts here.
