ICO unveils comprehensive strategy to regulate AI and biometrics

Aims to strike a balance between enabling innovation and safeguarding the rights of individuals

The AI and Biometrics Strategy will prioritise oversight in areas where there is both significant potential for public benefit and a heightened risk of harm, such as police facial recognition systems.

The UK Information Commissioner's Office (ICO) has released a comprehensive new strategy outlining how it will regulate AI and biometric technologies, focusing on high-risk use cases such as police facial recognition and automated decision-making (ADM) systems in public services and recruitment.

Published last week, the AI and Biometrics Strategy aims to strike a balance between enabling innovation and safeguarding the rights of individuals.

The ICO said it will prioritise oversight in areas where there is both significant potential for public benefit and heightened risk of harm.

Among the regulator's key actions are the development of a statutory code of practice for AI use, audits and legal guidance on police deployment of facial recognition technology, and setting "clear expectations" for the use of personal data in training generative AI models.

"The same data protection principles apply now as they always have - trust matters, and it can only be built by organisations using people's personal information responsibly," said Information Commissioner John Edwards.

The strategy sets out a proactive agenda for the regulator.

The ICO acknowledged rising concerns around fairness, transparency, bias, and accountability, stating that these areas would form the core of the regulatory focus.

The strategy's launch was backed by key political figures who warned of the societal shifts AI is ushering in.

Despite the growing influence of AI and biometric systems, UK adoption remains cautious.

According to the ICO, only 8% of organisations reported using AI decision-making tools involving personal data in 2024, and just 7% used facial or biometric recognition, both figures increasing only marginally over the previous year.

The ICO believes this limited uptake reflects broader public scepticism. Concerns are particularly acute regarding the use of biometric surveillance in law enforcement, algorithmic tools for recruitment, and AI systems used to determine eligibility for welfare benefits.

The regulator warned it would use its enforcement powers where necessary.

The ICO's strategy follows repeated warnings from independent oversight bodies and legal experts about the shortcomings in current regulatory frameworks for biometric surveillance in the UK.

A recent analysis by the Ada Lovelace Institute identified significant gaps in the country's approach to governing biometric technologies.

While its review focused primarily on law enforcement use of live facial recognition (LFR), it called for comprehensive legal clarity across public and private sectors.

Parliamentary committees, civil society organisations, former biometrics commissioners, and legal experts such as Matthew Ryder QC have all called for new legislation.

The Lords Justice and Home Affairs Committee alone has conducted three inquiries into biometric surveillance and algorithmic policing in recent years.

The House of Commons Science and Technology Committee called for a moratorium on LFR as far back as July 2019.

The ICO said it would continue engaging with developers, public bodies, and civil society groups to ensure that AI and biometrics work in the public interest, an increasingly pressing goal as these technologies become embedded in everyday life.