Information Commissioner: ‘I am worried about some AI innovations’

Regulators need to keep a close eye on automated decision-making systems, warns John Edwards

Image: John Edwards: ‘There’s a lot of snake oil’

The Information Commissioner, John Edwards, has said he is concerned about the pace of technological change and the way that some harmful technologies can become entrenched.

Speaking at the IAPP’s Data Protection Intensive event in London on Wednesday, Edwards said the data protection watchdog maintains a watching brief over technologies and innovations that are likely to become mainstream in the next 2-7 years, in order to be able to guide innovators and mitigate harms before they arrive.

Asked about the rapidly growing field of automated decision-making, Edwards said he saw some issues of concern around bias. “I am worried about some AI innovations being deployed into recruitment, for example, because they can exclude people.”

‘Digital phrenology’

As well as the risk to marginalised groups, there is little evidence that many of these products and services actually work, with uptake driven largely by spurious marketing claims, he said.

“There is, let's admit it, snake oil on the market, right? We are entering an age of a kind of digital phrenology, where great promises are made about emotional recognition systems, for example, which can prejudice against the neurodivergent, for example, or people who come from different ethnic or language communities.”

Other areas of concern include predictive profiling adopted by police forces to identify potential criminals, and systems that predict who is likely to suffer child abuse or domestic violence.

Regulators must be prepared to step in to investigate these sorts of products and services as they arrive, because commercial interests mean they can build up a momentum of their own, he said.

“I am worried about the confidence of vendors in products that are untested, particularly in high-risk areas like profiling. Too many people have too much invested in them to break them away,” he said, adding: “There is still a huge industry in lie detection in the US, despite the fact that polygraphs have been widely shown to be hokum.”

On age assurance

According to ICO research, 42% of parents feel they have little or no control over the data collected by social media platforms on their children. Last week the regulator announced probes into Reddit, Imgur and TikTok to assess whether these platforms comply with data protection laws when processing personal information belonging to users aged 13 to 17, and to examine their age assurance measures.

Asked whether, given advances in age assurance technology and the advent of the Online Safety Act, the government would likely require more hard age-gating measures, including enforcing an age threshold of 13 for some services, Edwards said he doubted it, adding that the ICO is working closely with Ofcom on the matter.

“The market is underdeveloped,” he said. “We're better at differentiating between under and over 18, and there are instruments in the market that allow people to assure their age over 18. But it's much more difficult at that 13 point. I think there are still tremendous challenges about telling the difference between someone who's 12 1/2 from someone who's 13 1/2.”

Instead, he said, most services would continue to be judged on best-endeavours efforts: being able to demonstrate that they are taking appropriate measures to protect children.