The UK government’s reliance on artificial intelligence (AI) to make vital decisions in areas from welfare to marriage licensing has come under scrutiny. An investigation by The Guardian reveals how the technology, including complex algorithms, is being deployed inconsistently across Whitehall departments, often with little oversight.
According to the investigation, at least eight Whitehall departments and several police forces are using AI tools, primarily in welfare, immigration, and criminal justice decisions.
Key findings include:
- The Department for Work and Pensions (DWP) used an algorithm believed to have wrongly led to multiple individuals losing their benefits.
- A facial recognition tool adopted by the Metropolitan police showed a higher error rate in recognising black faces compared to white ones under certain conditions.
- The Home Office’s algorithm for identifying fraudulent marriages disproportionately targeted specific nationalities.
Artificial intelligence is typically ‘trained’ on vast datasets. If those datasets contain inherent biases, the AI can propagate them, leading to discriminatory outcomes.
Prime Minister Rishi Sunak, while acknowledging AI’s potential to revolutionise public services, has also highlighted its implications in controversial scenarios. A notable case occurred in the Netherlands, where tax authorities used AI to identify suspected childcare benefits fraud. Wrongful fraud accusations drove numerous families into poverty, and the tax authority was ultimately fined €3.7m.
Experts are increasingly concerned that the UK could face a similar debacle. The use of poorly understood algorithms for significant decisions, without transparency, is a growing worry. The dissolution of an independent advisory board that monitored public sector AI use only compounds these concerns.
Shameem Ahmad, CEO of the Public Law Project, emphasised the balance between AI’s promise and its potential risks. “We’re at risk of becoming a society where automated systems, perhaps even unlawfully, make impactful decisions, leaving people with no recourse when errors occur,” Ahmad stated.
Marion Oswald, law professor at Northumbria University and former member of the government’s data ethics advisory board, pointed out the general lack of clarity and transparency in public sector AI usage. “These tools can significantly affect many, yet there’s little understanding or ability to challenge their decisions.”
In response to these concerns, the Cabinet Office introduced an “algorithmic transparency reporting standard” urging departments and police authorities to disclose AI use that might significantly impact the public. Six entities have disclosed projects under this standard.
As AI’s role in public decision-making becomes more prominent, the calls for transparency, accountability, and unbiased systems are growing louder. The forthcoming international summit on AI safety at Bletchley Park, spearheaded by Sunak, aims to set global terms for AI development, focusing on potential threats posed by advanced algorithms.