Are Deepfakes of Deceased Celebrities Changing Our Perception of History?


OpenAI’s latest app, Sora, is making waves. It lets users create AI-generated videos featuring themselves, friends, and even celebrities. The idea is a playground for creativity, where everyone can build on one another’s ideas.

Sora launched with consent controls: users can decide how their likeness is used in other people’s videos. But after the app quickly passed a million downloads, experts raised concerns that it could flood the internet with deepfakes and misinformation, especially involving deceased individuals who cannot give consent.

Imagine a video showing Marilyn Monroe teaching kids or Nat King Cole ice skating. While these are playful ideas, they also raise serious ethical questions. Adam Streisand, an attorney who has represented various celebrity estates, points to the core issue: the question is not only whether laws protect these figures, but whether legal systems can keep pace with new AI capabilities.

Sora’s ability to conjure videos of historical figures in questionable contexts has already upset some families. For instance, Zelda Williams, daughter of the late Robin Williams, expressed her distress over such portrayals. Similarly, Bernice King spoke up about her father, Martin Luther King Jr., asking for respect regarding how his likeness is used.

OpenAI acknowledges the fine line between creative expression and respect for legacies. A spokesperson said that while free speech is vital, families should control how their loved ones’ images are used. To address some of these worries, Sora’s generated videos carry watermarks and invisible embedded signals indicating they are AI-created. Experts warn, however, that both can be easily removed.

As deepfakes become increasingly realistic, academic voices like Liam Mayes of Rice University flag two main societal concerns. First, trusting viewers may fall prey to scams. Second, the perceived authenticity of genuine footage could erode, undermining public trust in media.

Mark Roesler, who manages legacy rights for many deceased personalities, has seen how technology continually raises concerns about protecting legacies. He states that while innovations can help keep important figures alive in popular culture, they also pose risks of misuse.

On the detection front, companies are building AI tools to recognize AI-generated content. Reality Defender, for instance, uses its own AI systems to spot deepfakes, while McAfee offers software that listens for “AI fingerprints” in a video’s audio. As deepfake generation evolves, detection tools must keep pace.

The technology also performs unevenly across languages: it is more advanced in widely spoken languages than in less common ones, creating a digital divide that affects both access and detection around the globe.

Concerns around deepfakes are not new. Ahead of the 2024 elections, many anticipated a wave of deepfake-driven misinformation, though it did not materialize as expected. Even so, AI-generated content remains a pressing issue as newer, more realistic tools emerge, prompting public debate about trust and integrity in media.

As we navigate this new terrain of AI technology, the importance of ethical considerations and robust detection mechanisms remains paramount. The balance between creative freedom and respect for individual rights is a contemporary challenge we are still learning to manage.
