What can we measure?
Jonathan Haslam, Deputy Director, IEE
This was the topic that I took for a #UKEdResChat I hosted. (Thanks to Karen Wespieser and Rob Webster for the invitation and support.) It’s a subject that’s been bothering me for a while. Here’s why:
- Where are the measures?
In my mind, I have quite a simple view of education research projects.
- You have a problem.
- You come up with an idea for fixing it.
- You try your new approach.
- You see whether it has made things better or worse.
- If it’s better, you do the experiment again, only on a larger scale.
(Let’s leave aside for now research that is describing or exploring a particular issue, rather than trying to improve outcomes for children.) To know that you have made a difference, you need valid, independent measures. Validity basically means that your measure does indeed measure what it’s supposed to. Measures should also be independent of the new thing that you’re doing, so that the group receiving the new approach has no inherent advantage on the test.
(Say that you are trying a new way of teaching children to cook. You teach one group in the new way to make apple crumble. You teach another group in the old way to make spaghetti carbonara. You test both groups on their ability to make apple crumble. This is clearly an unfair test. It is surprising, though, how many studies take a similar approach to measurement. Researcher-designed assessments are often a bad sign.)
You would therefore expect that there’s a handy list of these measures that you can choose from, possibly sorted by age group, subject, etc. So where is it?
Searching in advance of the Twitter chat, I found some sources in the US (here, for example), but nothing in the UK. I did find a helpful ONS review of measures for children’s well-being, but, perhaps symptomatic of this whole area, I think it’s now been archived.
A collection of measures that were valid and independent (and preferably free) would be hugely useful to both academic and practitioner researchers. For example, I am sure that schools thinking of putting together proposals for our innovation evaluations would appreciate a straightforward database they could search.
- What should we measure?
Of course, there are individual measures that researchers use. For example, the Woodcock–Johnson Tests of Cognitive Abilities or the Strengths and Difficulties Questionnaire. Costs range from free to expensive. The expense of delivering tests encourages many (including some EEF studies) to rely on statutory assessments (SATs, GCSEs, etc) that are being carried out anyway.
The problem with this is that your choice of measures is limited – usually to the impact on (core) curriculum subjects. This risks academic achievement becoming the only measure. Other measures of the impact that schools have on children (eg, social-emotional, non-cognitive, health) or intermediate outcomes that may be important on the way to academic achievement (eg, behavioural, attendance) might be lost.
In a perfect world, what if we agreed on a standard set of information to collect about children at particular stages of their school career? It would give us objective profiles of each child (remember, teacher assessments are prone to bias) that would inform the most appropriate interventions (even, one day perhaps, genetic scores). It would also allow the impact of small-scale (and larger) intervention projects to be measured more easily.
Pipe dream? Probably. The almost inevitable use of this data for accountability, regardless of what we might intend, would wreck it. The problems of developing measures that are valid, independent, and un-game-able (remember the fuss about coming up with a tutor-proof 11-plus exam) would be too great. In particular, soft skills are difficult to measure, and any whiff of accountability would mess things right up.
It might be a useful exercise, nonetheless, for schools to think about the data that they hold on each child and whether or not it provides an objective profile that helps them to provide each child with the best education they can.