How do we know how well we are doing?
Thursday, August 20, 2015
As the old saying has it, we value what we can measure, and we don’t always measure what we value. We all know that quantitative measures can be deeply flawed and therefore misleading, but we are all seduced by the simplicity of a ranking or a score – or even worse a pretty graphic – and often assign it more meaning than it can comfortably hold.
The reality behind the old saw above is that measuring children in any way except the most basic physical measurements – height and weight – is a process fraught with difficulty. Measuring learning is difficult enough and can result in simplistic tests of spelling or reading, or children being “taught to the test” generally: “What is it that I need to know to pass this English Literature GCSE about Julius Caesar?”
Every now and then (I really mean continually…) we have ministers and officials tinkering with the assessment and examination system, and complaining that teachers and schools “game the system”. That’s surely not surprising. Assessments and examinations are “high stakes” both for the child and the school (though I recall, as an education director, telling all our children not to get worked up about their Key Stage 2 tests as they would be forgotten inside a few months – the same is true of GCSEs if you go on to A-level, and of A-levels when you are at university – and degrees once you are in the world of work). We must expect intelligent teachers to do their best to help their students achieve the highest standards, with positive outcomes for the school and the teacher. It’s worth remembering that headteachers have received knighthoods and damehoods for improving standards, so the stakes are really high! But looking closely at what has really happened can reveal that the choice of courses, alongside some overt and covert selection, can have a huge impact. The question is whether these choices are in the best interest of students.
But this discussion pales into insignificance when you consider the many things that are not assessed, tested or reported on: teamwork, socialisation, practical problem-solving, practical communication, political understanding, personal finance skills – not to mention the intangible benefits of formal education: a love of reading, the ability to converse, a deep understanding of the physical world, a love of number for its own sake, a love of physical activity.
So… we value what we measure, and we don’t always measure what we value. And too often we fall into the trap of measuring what is easily measured, whether or not it is what we value.
Is this, then, a cry for moving away from measurement and performance management? Absolutely not: we need quantitative ways of assessing how we are doing – whether that is a student, a teacher, a school, a local authority, an academy chain, or the country, and whether we are talking about “academic” achievement, parenting, socialisation, abuse, or anything else that matters to us. If we can’t answer these questions then we can’t work to improve:
“How do we know how well we are doing?”
“How well are we doing compared with others?”
So we do need measures, and we do need to make sure they assess what we value – what we want children and young people to achieve.
And in a world of public accountability we ought not to keep the answers to these questions to ourselves – they should be made public.
The problems come when the metrics become high-stakes – when professionals can, and do, routinely lose their jobs if things go wrong. I’m not saying that gross underperformance should not be dealt with, but there must be room for professionals to improve, and an expectation that they will work to improve. The problem with relying only on high-stakes measures is that their operation distorts the whole system, and that there are clear and direct incentives to act inappropriately in the context of the whole child.
I write as chair of the National Consortium for Examination Results, NCER, and as chair of a governing body, and as you can tell I’m conflicted about all of this. But I’m quite clear that NCER should make available the best possible data – accurate, timely, and clearly presented – and that the whole sector should think carefully about what these data actually mean, and what action should follow.
One question that is asked every year is whether we are above or below the GCSE floor targets … no doubt it will be asked next week. But the real question is what we can do to continue to improve the education of our students, and GCSE scores are only a small part of that.
Neither knee-jerk judgements nor fatuous statements about requiring every school to be above the average do any of us, including children and young people, any favours.
John Freeman CBE is a former director of children's services and is now a freelance consultant