Algorithms don’t fail people, people do

The inequality in this year’s A-level results has been strongly linked to the performance of an algorithm – the statistical model that the government used to ‘automatically’ upgrade or downgrade pupils’ results. While ministers will be called to account, for many it will be the cold, faceless, automated algorithm that is seen as the problem. We, as liberals, must be clear: the A-level disaster is not a programming error; algorithms merely reflect, or even amplify, the biases of their designers.

The Labour Party has said the A-level algorithm was ‘unlawful’, the FT has described how ‘the algorithm went wrong’, and clearly the process had massively unfair outcomes. Yet this wasn’t data science gone rogue like Terminator’s Skynet or the inscrutable AI of Ex Machina: this was a political choice reflecting political biases.

When the government’s advisory body got its Covid-19 models ‘wrong’, it wasn’t because of the algorithms but because of a lack of diversity – both in thought and in demographics – within the advisory team. The algorithms and models weren’t ‘wrong’, and they certainly weren’t the reason that, for example, people of Bangladeshi origin were disproportionately affected by the disease – the problem lay in the inherent bias of a skewed group of designers.

In the case of this A-level algorithm, it made good statistical sense to include the historical performance of schools in the prediction model, because that minimises the total error across all results – intuitively, we know that pupils at exclusive, privately funded Eton get more A* results than pupils at woefully under-resourced Scumbag College. And yet it goes against every British and liberal value of fairness and equality of opportunity to downgrade the brightest lights from a poorer-performing school as part of a mathematical rounding exercise.
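To see why that statistical trade-off punishes outliers, here is a toy sketch – emphatically not Ofqual’s actual model, with invented school names and numbers – of what happens when pupils are simply ranked into their school’s historical grade distribution: the school’s past results cap what even its best pupil can be awarded.

```python
# Toy illustration (not Ofqual's actual model): predicted grades are drawn
# from the school's historical grade distribution, and pupils are slotted
# into it in teacher rank order. Aggregate results match history closely,
# but no individual pupil can outperform the school's past.

def grades_from_history(ranked_pupils, historical_distribution):
    """Assign grades by slotting ranked pupils into the school's past
    grade distribution. `historical_distribution` maps grade -> share of
    pupils who achieved it in previous years (shares sum to 1)."""
    n = len(ranked_pupils)
    grades = []
    for grade, share in historical_distribution.items():
        grades.extend([grade] * round(share * n))
    # Pad or trim so every pupil gets exactly one grade.
    lowest = list(historical_distribution)[-1]
    grades = (grades + [lowest] * n)[:n]
    return dict(zip(ranked_pupils, grades))

# A historically low-attaining school: no A*s in recent years, so even the
# pupil their teachers ranked first cannot be awarded one here.
history = {"A*": 0.0, "A": 0.05, "B": 0.25, "C": 0.45, "D": 0.25}
pupils = [f"pupil_{i}" for i in range(1, 21)]  # already in teacher rank order
print(grades_from_history(pupils, history))
```

Minimising the total error across all results is exactly what this kind of construction does well – and capping the brightest pupil at a struggling school is exactly the cost of doing it.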

Whether we are talking about life and death in public health or the life opportunities of our children, the issue isn’t whether algorithms ‘go wrong’ but whether we have fairness and diversity in the teams that build the algorithms – and in the ministers signing them off.

The Education Secretary, Gavin Williamson, was warned as far back as July about the problems with this process but decided to go ahead with it anyway. He had also seen the Scottish government’s earlier U-turn after similar accusations of unfairness, yet still he backed the algorithm and its outcomes – and, in the case of small (mostly private-school) classes, he supported not using the algorithm at all, allowing grade inflation for the privately educated.

This is not the fault of some cold, faceless bit of technology; this is a clear choice by those in power.

Top Tory advisor Dominic Cummings has spoken about how, in his experience, the Tories don’t care about poorer people or the NHS – did we see this in the A-level algorithm’s design? Is this what happened when Gavin Williamson approved an unfair and unequal outcome?

Perhaps a more pressing question is: will we spot it when it happens again? These biases and political decisions came to light for A-level results because they affected a very large and broad group of our children. Parents of all sorts, pointy-elbowed or otherwise, were acutely aware and raising merry hell. But what will happen when another algorithm, designed by a biased team and approved by a biased minister, negatively impacts a smaller or more hidden group?

Will we even know that the Home Office has designed an algorithm that is sending people to detention centres? Or that the DWP has an algorithm that is stopping benefits for a small group of wrongly suspected cheats? These things will likely go unnoticed by the public, silently approved by departmental ministers and an increasingly hollowed-out and subservient civil service.

We don’t need a ‘regulator for algorithms’ but we should have a regulator or ombudsman for each of our government’s increasingly illiberal ministries.

