How many regulators does it take to tackle deepfake pornography?

And why doesn’t the Online Safety Act cover it?

An explainer on why UK regulators are struggling to tackle deepfake imagery generated with Grok and posted on X.

Ofcom said yesterday that its investigation into X and the posting of non-consensual deepfake pornography on the platform is a “matter of urgency”. Yet at the same time, it said it was unable to investigate the creation of illegal images by Grok because it lacks sufficient powers relating to chatbots.

Shortly afterwards, the ICO said it was launching its own investigation in conjunction with Ofcom, but specifically into the processing of personal data by the Grok chatbot.

This looks confusing. Why can’t Ofcom, the regulator tasked with enforcing the Online Safety Act, investigate and enforce online safety? What has the ICO got to do with any of this?

Also, why did Technology Secretary Liz Kendall say a couple of weeks ago that the government is working on new legislation to make it illegal for companies like xAI to supply tools designed to create non-consensual sexual imagery? Doesn’t the vast Online Safety Act already cover these harms?

The first two of these questions can be answered by looking at the different investigative powers each regulator derives from the specific laws it enforces. Both regulators are concerned with the harm caused by AI-generated content (specifically the recent controversy surrounding "Grok" and sexualised deepfakes), but their remits differ.

There are two related reasons why Ofcom can’t progress an investigation specifically into Grok and xAI.

  1. The Online Safety Act, from which Ofcom derives its legal powers, is designed primarily to regulate how social media companies handle content shared between users, so the X platform feed is within its remit. If a chatbot like Grok is used in a standalone, private one-to-one interaction, it falls outside the legal definition of a "user-to-user" service.
  2. This means that Ofcom can investigate X as a platform for allowing deepfakes to be shared on its feed, but it has limited power to investigate xAI for generating that content in a private chat, unless the chatbot is classified as a "search service" or a "provider of pornographic content."

The Information Commissioner’s Office (ICO) derives its power from the UK GDPR and the Data Protection Act 2018, so it has a broader remit. There are three principles underpinning this.

  1. AI models are trained on data and generate images of real people. This is considered "processing personal data", which enables the ICO to investigate xAI and Grok.
  2. The ICO is investigating whether xAI had a legal basis to use people’s images to train the model, and whether it built in enough safeguards to prevent that data from being "processed" into harmful, non-consensual imagery. This is enabled under the lawfulness of processing provisions of the UK GDPR.
  3. Unlike Ofcom, the ICO does not need to distinguish between content shared publicly and content kept in a private chat in order to enforce data protection law. If personal data is being handled, it is the ICO’s responsibility to investigate.

The one area where Ofcom can investigate xAI comes under Part 5 of the Online Safety Act, which requires services that publish pornographic content to age-gate that content.

The problem facing the government is that, when it comes to chatbot-generated pornographic content, the Online Safety Act was essentially outdated by the time it eventually became law. Tools like “nudification” apps had not yet been developed when the Bill was being drafted.

This is why the government is now seeking to close loopholes that AI companies can exploit. The existing Crime and Policing Bill is being amended to make it a criminal offence to supply AI tools specifically designed or adapted for creating "nudification" or child abuse material.

The government has also recently used the Data (Use and Access) Act 2025 to criminalise the creation of non-consensual intimate images. Prior to this, criminal liability relied on an intent to share; creating the content in the first place was not an offence. It is now.