The system or the student?
This blog is written with the benefit of hindsight and the luxury of not being accountable. I don’t pretend that I warned this would happen, or that I have any easy answers.
The problem this year was that we tried to act as if exams were being sat when they weren’t. A system was then set up to try to replicate the overall pattern and feel of an examination year, with only a fairly loose relationship to the performance and potential of individual pupils.
Actually, this isn’t exactly unique to this year. Due to the desire to keep the overall results looking ‘right’ there is usually a large volume of error in individual results. The difference is that people are inclined to believe the exam result over teachers’ or schools’ predictions. This year, there were no exam results, and the algorithms used were applied to very imperfect proxies. But there is always an algorithm.
Long before this year, we chose as a country to prioritise getting a set of results that look neat and believable in terms of headline numbers over accuracy (and fairness) in terms of individual pupils. This is a classic case of prioritising the system over the people within it. Perhaps not too dissimilar to how we dealt with a different form of inflation back in the ‘80s. We saw this in action during the English GCSE fiasco of 2012. The choices made during the Covid-19 crisis have accentuated an existing problem to extreme proportions.
Now is a good time to reflect on whether this is the right design principle.
There is value in the integrity of the system: it gives currency to the individual results, but only up to a point. And that confidence has likely been undermined in any case.
There is an argument now in favour of generosity to the individuals inside the system. People have worried, understandably, about how this year’s results will be compared to last year’s and next year’s, as different cohorts compete in the job market. That is now a moot point. No one will ever believe that the 2020 results are comparable to any other year. That ship has sailed.
Our priority should therefore be access: how do we ensure that each young person gets to the best destination they had waiting for them? This cohort have lost so much to Covid; let’s not take their plans and aspirations away from them too. What does generosity cost us? There is no system left to protect this year.
This could be achieved in a number of ways. We could refer to centre-assessed judgements. These are more generous because teachers see potential over time, where exams capture only performance in the moment, but they are not random or baseless.
We could also work to ensure that existing offers are honoured and help institutions to do so. This may involve helping universities and other destinations to support more places. So what if we have more students in the year ahead? It’s not the end of the world. I am not sure why getting a few more than normal is worse than getting a few fewer.
These measures need to be applied retrospectively to A levels and prospectively to GCSEs. If that work cannot be done in the time remaining for GCSEs, there is an argument for delaying their announcement.
There is urgent work to be done as young futures hang in the balance. But there are longer term lessons to be learned.
When you use a new model of any complexity, you never really know what the results will be. It seems that, by relying on centre assessment for smaller groups and statistical adjustments for larger groups, the model is more generous to pupils in smaller groups. Private school students are more likely to be in smaller groups.
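The mechanism can be shown with a toy sketch. This is not Ofqual’s actual model: the threshold, grade scale, and historical distribution below are invented purely for illustration of how a size-based rule creates two different regimes.

```python
# Toy illustration only: the cutoff, grades, and distributions here
# are illustrative assumptions, not the real model's parameters.
SMALL_COHORT_THRESHOLD = 5  # assumed cutoff for "small" groups

def award_grades(teacher_grades, historical_distribution):
    """Small cohorts keep their teacher-assessed grades; large cohorts
    are forced to match the school's historical grade distribution,
    handed out by rank (best pupils first)."""
    n = len(teacher_grades)
    if n <= SMALL_COHORT_THRESHOLD:
        # Small group: centre assessment stands (tends to be generous).
        return list(teacher_grades)
    # Large group: convert historical shares into whole-pupil counts
    # and overwrite teacher judgements rank by rank.
    # (Toy rounding; a real model would handle remainders carefully.)
    awarded = []
    for grade, share in historical_distribution:
        awarded.extend([grade] * round(share * n))
    return awarded[:n]

history = [("A", 0.2), ("B", 0.5), ("C", 0.3)]  # best grade first

# A small class keeps its teacher grades untouched...
small = award_grades(["A", "A", "B"], history)

# ...while a large class is pinned to last year's pattern: the teacher
# submitted four As, but the distribution only permits two.
large = award_grades(["A"] * 4 + ["B"] * 4 + ["C"] * 2, history)
```

The same teacher generosity survives in the three-pupil class but is overwritten in the ten-pupil class, which is the asymmetry the paragraph above describes.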
It is reasonable to accept that this was hard to predict, especially with a small group of people working rapidly in a crisis. This is a strong argument for ensuring all algorithms and models are open and transparent and published long in advance. There are many smart people with strong motives in schools and colleges across the land, and in organisations around them, who can stress test any algorithm and discover and correct any weaknesses. The current lack of trust between professionals and government is now revealing some of its costs.
More broadly, we operate education (and have done for many years) with an unspoken principle: that individual inaccuracy is a price worth paying for confidence in, and the stability of, the system. It is a consensual fiction we all participate in. It’s not really any different from the tacit acceptance of grade inflation that went before and was so derided. Neither set of outcomes really reflects what is happening inside the system, but they make different groups of people feel better about it. Children do not always get the results they deserve, but things feel robust and rigorous. Surely the time has come to challenge the principle.
It piles disadvantage on the already disadvantaged. They need the result: it is their objective passport when everyone is subjectively judging their background, their class, their family. The more privileged can cope with errors: they can appeal, resit, lobby or use their networks to explain the error and get where they need to go.
We need to design a system of assessment that ensures those who most need it get the most accurate results. This should be principle number one. And of course, the results are only the product of all the work that has gone before: teaching and study stretching back years. If you want fair results, you need more than equal inputs. You have to weight resources to counterbalance the unfairness elsewhere.
The use of prior attainment as one of the driving forces in the exam system is particularly problematic. It acts to erase the possibility of confounding expectations. And who does that benefit? Those for whom we already have high expectations. How does a student or school get to prove us wrong and leave their history behind?
Can we design a system that puts fair results for those who most need them ahead of other considerations? I am not an assessment expert. It is a complex specialism, but it can’t be beyond our grasp. I believe in exams as a potential tool for social justice as they do not depend on judgement. But that is not their current effect.
When proposing remedies, we should be wary of solutions which work for a pandemic year but would be less than ideal in normal years. Yes, coursework or modular exams would be more resilient to cancellation. But they also favour the privileged in their own way. Remember, the problem this year was not that we had exams; it was that we didn’t have exams. All that was left was the system we used to adjust and shape the results of the exams – and thus its effects were impossible to hide.