Elon Musk’s X Takes Action Against Sexual Deepfakes Amid Backlash, Yet xAI’s Grok Continues to Create Them

Elon Musk’s Grok AI has sparked significant debate: the model is being restricted on Musk’s platform, X, while remaining far less restricted elsewhere. Recently, Grok’s image generation feature on X was moved behind a paywall. The change came after users and regulators raised concerns about inappropriate content, particularly nonconsensual sexualized images.

For a while, Grok could generate sexualized images of people without their consent. A review found that the number of these images being created was alarmingly high. For instance, Grok produced over 7,700 sexualized images in just an hour, a shocking increase from earlier in the week.

Experts like Genevieve Oh, a researcher who studies deepfakes, documented this surge in disturbing content; she monitors Grok’s activity to highlight the risks it poses. After public outcry grew and the changes took effect, the volume of sexualized imagery generated by X’s reply bot dropped dramatically.

However, in Grok’s standalone app, users could still create sexualized images without restriction. This discrepancy raises questions about how effective the changes really are, and the issue clearly extends beyond a single platform: regulators around the world are taking notice.

UK Prime Minister Keir Starmer openly criticized X, calling the situation “disgraceful.” In response, Britain’s media regulator, Ofcom, is assessing X’s compliance. Other countries like Ireland and India are also scrutinizing Grok’s safety measures.

In the U.S., lawmakers are still considering more aggressive standards for AI-generated content. The Take It Down Act—recently signed into law—aims to penalize the creation and sharing of nonconsensual explicit images. Although there’s time for platforms to adapt, representatives stress the need for immediate action.

As AI technology evolves, its potential for misuse is evident. Experts warn about increased threats to privacy and dignity as these tools become more powerful. Calls for accountability are growing louder, signaling a critical moment for both users and regulators.

For more detailed insights, check out this NBC News article covering the broader implications of AI-generated content.
