Ofcom contacts X over reports Grok AI generates sexualised images of children

Authorities in France, India and Malaysia are also reported to be assessing the situation

The media regulator Ofcom has made "urgent contact" with xAI after reports that its chatbot, Grok, can be used to generate sexualised images of children and digitally undress women without their consent.

A spokesperson for Ofcom said the regulator was seeking further information from the company as scrutiny intensified around the design and safeguards of the tool.

"Based on their response we will undertake a swift assessment to determine whether there are potential compliance issues that warrant investigation," the spokesperson added.

Grok is a virtual assistant, free to use with some premium features, that responds to user prompts when it is tagged in posts on X.

The BBC reports that it has seen multiple examples on X, the social media platform formerly known as Twitter, where users asked Grok to alter real photographs to place women in sexualised scenarios.

In many instances, the images appeared to depict real individuals who had not consented to their images being altered.

In recent days, users on X have also expressed concerns about explicit content involving minors, including images of children wearing minimal clothing that they said had been generated using Grok.

Concerns have intensified since X introduced an "Edit Image" button, which allows any user to alter a photo using text prompts, even one they did not upload themselves and without the consent of the person shown.

On Sunday, X warned users that Grok must not be used to generate illegal content.

Elon Musk, who owns both X and xAI, also posted publicly that anyone who asks the AI to create illegal material would "suffer the same consequences" as if they had uploaded such content themselves.

xAI's own acceptable use policy bans the pornographic depiction of identifiable individuals.

Global concerns

Regulatory concern is not limited to the UK.

The European Commission said on Monday that it was "seriously looking into the matter", with authorities in France, India and Malaysia also reported to be assessing the situation.

In the UK, the Internet Watch Foundation (IWF) said it had received reports from members of the public relating to images generated by Grok on X.

However, it told the BBC that it had not yet seen images that crossed the legal threshold to be classified as child sexual abuse imagery under UK law.

Under the UK's Online Safety Act, creating or sharing intimate or sexually explicit images — including AI-generated deepfakes — without a person's consent is illegal.

The legislation also requires technology companies to take "appropriate steps" to reduce the risk of users encountering such material, and to remove it quickly once they are made aware of it.

Dame Chi Onwurah, the Labour MP who chairs the Commons science, innovation and technology committee, described the reports as "deeply disturbing".

She said the committee had found the Online Safety Act to be "woefully inadequate".

Concerns about manipulated and harmful online content have grown rapidly since the launch of ChatGPT in 2022, which helped trigger a surge in AI image-generation platforms. The technology has also contributed to the rise of tools that produce non-consensual deepfake nude images of real people.

David Thiel, a trust and safety researcher formerly with the Stanford Internet Observatory, told CNBC that US law generally outlaws the creation and sharing of certain explicit images, including child sexual abuse material and non-consensual intimate images.

However, he said legal determinations around AI-generated images can hinge on the specific details of what is created and shared.

"There are a number of things companies could do to prevent their AI tools being used in this manner," Thiel said.

"The most important in this case would be to remove the ability to alter user-uploaded images. Allowing users to alter uploaded imagery is a recipe for NCII. Nudification has historically been the primary use case of such mechanisms."

Computing says:

Individual Grok users have been using it to create non-consensual deepfake pornography for days now. There are also multiple reports of child sexual abuse material (CSAM). If any other entity were hosting a website featuring this kind of imagery, Ofcom's "urgent contact" would be reinforced by police action and arrests.

The lack of curiosity and action about the use of this technology to degrade, humiliate and harm women and children, from the companies making AI tools, from the bodies responsible for enforcing the law, and from states, is shameful. The most likely outcome of Ofcom's urgent request for further information from X is that it will be ignored.

Then what?