UK Accused of Delaying Deepfake Law as Outcry Grows Over Grok AI Images

As generative AI outpaces regulation, victims say delays in enforcing the law leave them exposed to abuse on mainstream platforms

Campaigners are accusing the British government of moving too slowly to enforce a law that would make it illegal to create non-consensual sexualised deepfake images, as anger mounts over the use of Elon Musk's Grok AI to digitally remove clothing from photographs of women.

The criticism follows reports that Grok, an artificial intelligence tool integrated into the social media platform X, has been used to generate sexualised images of women without their consent. One woman told the BBC that more than 100 such images had been created of her, leaving her distressed and reluctant to continue using the platform.

While it is already illegal in the UK to share sexual deepfakes of adults without consent, legislation passed in June 2025 that would criminalise the creation or commissioning of such images has yet to come into force.

The delay has prompted concern from legal experts and advocacy groups, who say the gap leaves victims exposed to abuse that is increasingly easy to carry out using generative AI tools.

It remains unclear whether all of the images produced using Grok's so-called "unclothing" feature would be covered by the new offence once implemented. The BBC has contacted the government for comment.

In a statement, X said: “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”

The controversy has drawn condemnation from the Prime Minister, Sir Keir Starmer, who described the sexualised images produced by Grok as “disgraceful” and “disgusting.”

“It’s not to be tolerated,” he said in an interview with Greatest Hits Radio. “X has got to get a grip on this. Ofcom has our full support to take action in relation to this. This is wrong.”

Grok can be accessed through a standalone website and app, or by tagging “@grok” in posts on X.

Initially marketed as a conversational assistant, the tool has recently gained image editing capabilities that allow users to alter photographs uploaded to the platform, including by placing people in sexualised poses.

Andrea Simon, the director of End Violence Against Women, said the government’s failure to bring the law into force had “put women and girls in harm’s way.”

“Non-consensual sexually explicit deepfakes are a clear violation of women’s rights and have a long-lasting, traumatic impact on victims,” she said. “For women using platforms like X, the threat of this abuse can also mean they feel the need to self-censor and change their behaviour, restricting their freedom of expression and participation online.”

She added that the issue extended beyond criminal law. “This is not solely a criminal justice issue but an issue of regulating a tech ecosystem that facilitates and profits from violence against women and girls.”

Pressure on the platform intensified this week after the Technology Secretary, Liz Kendall, called on X to “deal with this urgently,” describing the situation as “absolutely appalling.” The media regulator Ofcom said it had made “urgent contact” with X and xAI, the company behind Grok, and confirmed it was investigating the concerns.

Downing Street has backed Ofcom’s involvement, with the Prime Minister’s spokesperson saying that “all options remained on the table.”

The Ministry of Justice said existing laws already criminalise the sharing of intimate images, including deepfakes, without consent. “We refuse to tolerate this degrading and harmful behaviour,” a spokesperson said, “which is why we have also introduced legislation to ban their creation without consent.”

Under current law, generating pornographic deepfakes is illegal when they are used in cases of so-called "revenge porn" or when they depict children. A provision in the Data (Use and Access) Act 2025 goes further by criminalising the creation or commissioning of “purported intimate images,” according to Lorna Woods, a professor of internet law at the University of Essex.

Professor Woods said the offence “would seem to be a good fit for some of the images that have been created using Grok,” but noted that the relevant provision has yet to be brought into force.

A year after ministers first announced a crackdown on sexual deepfakes, campaigners and legal experts say the absence of secondary legislation has left a critical loophole. Simon questioned why the necessary legal steps had still not been taken, given that such images are “a clear violation of women’s rights.”

“This law has still not come into force,” she said, “nor has a date been set for when this will take place.”

Baroness Owen, a Conservative peer who pushed for the legal changes in the House of Lords, said the government had “repeatedly dragged its heels.”

“We cannot afford any more delays,” she said. “Survivors of this abuse deserve better. No one should have to live in fear of their consent being violated in this appalling way.”

Baroness Beeban Kidron, a crossbench peer, said the pace of technological change made delay indefensible. “Technology moves fast, and this legislation is supposed to plug an existing gap,” she said. “There is no excuse for delay.”

Several women told the BBC that their images had been altered by Grok after they posted photos of themselves on X. In many cases, users tagged the chatbot beneath the images with prompts asking it to undress the subject or place her in a sexualised scenario.

Evie, one of the women affected, said she first noticed people asking Grok to put her in a bikini several months ago. She said a recent update had made it easier to create such images and that the results appeared “more realistic.”

She estimates that at least 100 sexualised images of her have been generated using the tool. She has stopped reporting them all, she said, because of the “mental strain” involved in repeatedly viewing the images.

The impact extends beyond the platform itself. “My family follow me on there, my friends, my co-workers,” she said. “Knowing that all the people I care about in my life can see me like that… it’s disgusting.”

Dr Daisy Dixon, another X user, said she had also seen a rise in the use of Grok to undress her images, particularly her profile picture. She said the experience left her feeling “humiliated” and described the act of Grok posting the altered image back to her as deeply violating.

“To have that power move of posting it back to you, it’s like saying ‘I have control over you and I’m going to keep reminding you I have control over you,’” she said. “We don’t want to dilute the concept, but it feels like a kind of assault on the body.”

For women like Evie, the issue now is urgency. “There’s so many places online that you can do this, but the fact that it was happening on Twitter with the built-in AI bot… this is crazy this is allowed,” she said. “Why is this allowed and why is nothing being done about it?”
