Google has expanded its AI tool, Gemini, to help identify videos created or edited with its technology. Users can ask Gemini if a video was made using Google’s AI and receive a detailed response rather than just a simple “yes” or “no.”
To answer, Gemini scans the video’s visual frames and audio track for SynthID, an imperceptible digital watermark that Google’s AI models embed in the content they generate. The feature was first introduced for images in November and now extends to video as well.
Notably, while visible watermarks can often be cropped or edited out, SynthID is designed to be nearly invisible, woven into the content itself rather than overlaid on top of it. How well it withstands deliberate attempts to alter or erase it remains to be seen.
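Google has not published SynthID’s actual embedding scheme, but the general idea of an imperceptible watermark can be shown with a deliberately naive sketch: hiding identifier bits in the least-significant bits of pixel values, where the change is too small for the eye to notice. Everything here (the bit string, the pixel values) is illustrative, and this toy approach is far less robust than a production watermark:

```python
# Toy illustration of an imperceptible watermark -- NOT SynthID's actual
# method, which is not public. We hide a short bit string in the
# least-significant bits (LSBs) of grayscale pixel values.

WATERMARK = "1011"  # hypothetical identifier bits

def embed(pixels, bits):
    """Overwrite the LSB of the first len(bits) pixels with watermark bits."""
    out = list(pixels)
    for i, b in enumerate(bits):
        # Clearing the LSB and setting it changes each value by at most 1,
        # which is visually imperceptible.
        out[i] = (out[i] & ~1) | int(b)
    return out

def extract(pixels, n):
    """Read the first n LSBs back out as a bit string."""
    return "".join(str(p & 1) for p in pixels[:n])

frame = [200, 131, 54, 77, 90]   # toy grayscale pixel values
marked = embed(frame, WATERMARK)

assert extract(marked, len(WATERMARK)) == WATERMARK
# No pixel moved by more than 1 brightness level:
assert all(abs(a - b) <= 1 for a, b in zip(frame, marked))
```

Unlike this LSB toy, which a single re-encode or crop would destroy, a production watermark like SynthID is designed to survive common transformations such as compression and editing.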
Gemini can verify videos that are up to 100 MB in size and 90 seconds long. This capability is offered in all languages and regions where the app is available.
Recent studies suggest that the threat of deepfakes is growing. A report from the non-profit organization DeepTrust Alliance found that over 70% of people are concerned about misinformation in videos. This highlights the need for reliable verification tools like Gemini, especially as social media platforms struggle to keep up with identifying AI-generated content.
Experts in tech and media believe that tools like Gemini could play a key role in maintaining trust online. As AI technology becomes more advanced, proper tagging and verification are essential. The potential for misuse underscores the importance of developing robust systems that can ensure content authenticity.
For more insights on AI and content verification, see the DeepTrust Alliance report.