Ofcom Investigates X Over Allegations of Grok AI Creating Inappropriate Images of Children

Admin

Ofcom has contacted Elon Musk’s xAI over alarming reports about its AI tool, Grok, amid mounting concerns that Grok can create “sexualized images of children” and generate “nudified” pictures of adults without consent.

An Ofcom spokesperson said the regulator is examining these complaints. Users on the social media platform X have been asking Grok to manipulate real images, often depicting women in bikinis or sexual contexts without their consent. The BBC has documented several examples of such requests.

X has yet to comment on the reports, but the platform has cautioned users against using Grok to generate illegal content, including child sexual abuse material. Musk stated that anyone seeking illegal content through the AI would face the same consequences as if they had uploaded it themselves. Despite an existing policy prohibiting sexual depictions, some users continue to use Grok in violation of this rule.

Among those affected, journalist Samantha Smith said that seeing herself digitally altered into a bikini felt “dehumanizing,” and that it felt as violating as if an actual image had been shared.

The Online Safety Act (OSA) makes it illegal in the UK to create or share explicit images without consent, which includes AI-generated content. This law puts pressure on technology companies to act swiftly when such content emerges and to implement measures to protect users.

Dame Chi Onwurah, chair of the Science, Innovation and Technology Committee, described the situation as “deeply disturbing.” She criticized the OSA as “inadequate” and urged the government to hold social media companies accountable for their content.

The European Commission is also monitoring the situation closely, describing the generation of explicit content from childlike images as illegal and appalling. It has warned X of potential consequences, including fines, for violating digital regulations.

Recent statistics show a significant rise in the misuse of AI tools to create non-consensual intimate content. A 2023 Pew Research study found that 36% of AI users had encountered instances of AI being used to create misleading or explicit imagery, underscoring the urgent need for stronger regulations and protective measures to safeguard individuals online.

Conversations about this issue are intensifying on social media, where many users are expressing outrage over the misuse of AI technology. As the digital landscape evolves, it is crucial to ensure that tools like Grok are used responsibly and that individuals’ rights are protected.

To learn more about the implications of AI and consent, check out Pew Research on AI and Ethics.
