ICO raps Snap over AI chatbot privacy

Snap 'did not adequately assess the data protection risks,' regulator says


UK data protection watchdog the Information Commissioner's Office (ICO) has sent a preliminary enforcement notice to Snap, over a "potential failure to properly assess the privacy risks" posed by its generative AI chatbot My AI.

My AI is a chatbot offered to Snapchat+ subscribers that the company advertises as being able to "answer a burning trivia question, offer advice on the perfect gift for your BFF's birthday, help plan a hiking trip for a long weekend, or suggest what to make for dinner."

It says that content shared with My AI, including location data, will be used to "provide relevant and useful responses to your requests, including nearby place recommendations. Your data may also be used by Snap to improve Snap's products and personalise your experience, including ads."

Content shared with My AI is stored until the user deletes it.

The ICO, which has been investigating the app, said that Snap "did not adequately assess the data protection risks posed by the generative AI technology, particularly to children," before the UK launch of My AI in February. It has issued a preliminary notice to the company, to which Snap must respond.

If the social media company fails to adequately address the regulator's concerns, My AI could be banned in the UK.

In a blog post, Information Commissioner John Edwards said: "The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching My AI.

"We have been clear that organisations must consider the risks associated with AI, alongside the benefits. Today's preliminary enforcement notice shows we will take action in order to protect UK consumers' privacy rights."

We have contacted Snap for comment.

Matt Worsfold, risk advisory partner at London law firm Ashurst LLP, said the preliminary enforcement notice highlights the increasing regulatory focus on issues surrounding generative AI. "In particular, it raises important questions regarding the control over data inputs and outputs into large language models, and the ability for organisations to meet their data protection obligations and safeguard the privacy rights of individuals."