Ofcom is investigating Elon Musk’s platform, X, over concerns that its AI tool Grok is generating inappropriate images. Reports indicate that Grok may have been used to create sexualized pictures, including harmful content involving minors.
If X is found in violation, Ofcom could impose substantial fines of up to 10% of the company’s global revenue or £18 million, whichever is higher.
X has responded, stating that users who generate illegal content with Grok will face the same penalties as those who upload illegal images directly. Musk suggested that the UK government is looking for reasons to regulate online content excessively, pointing out that other AI platforms are not facing similar scrutiny.
Recent reports have documented troubling examples of Grok-generated images in which women’s photos were altered without their consent. One woman said that more than 100 sexualized images had been created using her likeness.
If X doesn’t comply with Ofcom’s requests, the agency could seek a court order to restrict access to the platform in the UK. Technology Secretary Liz Kendall expressed the need for a swift investigation, emphasizing the urgency for victim protection.
Former Technology Secretary Peter Kyle labeled Grok’s deployment as “appalling” and highlighted instances where the AI misused images of vulnerable individuals. Other political figures, including Northern Ireland assembly member Cara Hunter, have also voiced concern and indicated they would leave the platform.
Downing Street stressed its commitment to protecting children online, stating that it would continue to review the government’s presence on X.
Ofcom’s inquiry will determine whether X adequately removes illegal content and implements effective age verification to prevent minors from accessing harmful material.
The scrutiny comes amid broader backlash against Grok’s features, with countries like Malaysia and Indonesia temporarily restricting access due to similar concerns.
According to Lorna Woods, a professor of internet law, the pace of the investigation can vary. She noted that while Ofcom has discretion, prioritizing the case is essential because failing to act could harm children.
As the situation unfolds, a significant conversation about regulating AI tools and protecting individuals online has emerged. It raises questions about how far tech companies should go in monitoring content and how regulations can effectively shield vulnerable users from exploitation.