Misuse will be punished: UK regulator warns against AI complacency

'The same rules apply as they always have done'

Image: John Edwards. Source: ICO
The UK Information Commissioner John Edwards has warned organisations against complacency when implementing AI, saying they must consider privacy risks and protect people's personal information.

Speaking at an event organised by industry body techUK yesterday, Edwards said that organisations should be wary of getting caught up in the excitement around the technology, which, if wrongly implemented, presents a serious privacy risk: "There is no regulatory lacuna here, the same rules apply as they always have done."

Misuse could also wipe out many of the technology's potential benefits, he went on, referring to a recent US survey that found people are becoming markedly less trusting of AI.

"If people don't trust AI, then they're less likely to use it, resulting in reduced benefits and less growth or innovation in society as a whole," Edwards said. "This needs addressing - 2024 cannot be the year that consumers lose trust in AI."

He warned that "bad actors" who abuse personal data to gain an unfair advantage will not be tolerated, referencing Clearview AI, the facial-image scraping company that the Information Commissioner's Office (ICO) fined £7.5 million for abuse of personal data (a fine that was later overturned on appeal).

"Non-compliance with data protection will not be profitable," Edwards promised. "Persistent misuse of customers' information, or misuse of AI in these situations, in order to gain a commercial advantage will be punished."

Punishments available to the data protection regulator include fines "commensurate with the ill-gotten gains achieved through non-compliance", orders to stop processing data, and orders to delete any data that was gathered improperly. The UK Data Protection Act also allows for criminal prosecution for serious or intentional data violations.

He urged organisations to "stress test" products and services before launching them by working with the ICO, through its Sandbox and Innovation Advice service, or with other regulators including Ofcom, the Financial Conduct Authority (FCA) and the Competition and Markets Authority (CMA).

"There are no excuses for not ensuring that people's personal information is protected if you are using AI systems, products or services," he said.