Pilot programme uses personal data to assess future threat levels among former offenders
The UK government is piloting a new artificial intelligence system designed to flag individuals at high risk of committing serious crimes. The initiative, still in its early research stages, seeks to improve public safety by analysing the behavioural and personal data of those with prior convictions.
Named “Sharing Data to Improve Risk Assessment,” the project brings together information from police forces and probation services. By feeding historical data—such as criminal records, mental health status, substance abuse history, and past suicide attempts—into predictive algorithms, officials hope to identify individuals at high risk of serious offending before any crime occurs.
Concerns grow over data ethics and civil rights
Despite its preventative goals, the programme is already facing criticism from human rights groups and privacy advocates. Detractors argue that the model could entrench systemic bias and disproportionately target marginalised communities, especially given the deeply personal nature of the data being used.
Critics also fear that such predictive tools may lead to a form of pre-emptive surveillance that undermines fundamental legal principles, including the presumption of innocence and the right to privacy.
Amnesty International, in a February 2025 statement, urged the government to abandon predictive policing tools altogether, citing their “unreliable nature” and the high risk of discriminatory outcomes.
Science fiction turns real for some rights groups
The project has drawn comparisons to dystopian science fiction, with some observers likening it to narratives in which authorities intervene on the basis of predicted—rather than actual—behaviour. While government officials maintain the system is intended purely for risk evaluation and support services, critics remain wary of its long-term implications.
Balancing safety and freedom in the age of AI
While proponents argue the technology could support better decision-making in the criminal justice system and reduce recidivism, the ethical balancing act remains delicate. The UK is far from alone in pursuing such technologies, as governments globally experiment with AI for public safety purposes.
Still, as the country charts new territory in digital policing, the central dilemma persists: how can society protect itself from future harm without compromising the rights of those it seeks to watch?
The answer, according to civil society groups, may depend less on the technology itself and more on the safeguards, transparency, and public oversight that govern its use.