A recent study by 22 public service media organizations, including DW, has revealed that popular AI assistants misrepresent news 45% of the time. Journalists from respected outlets like the BBC and NPR assessed responses from ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity AI.
In their evaluation, the journalists assessed criteria such as accuracy, sourcing, and the ability to distinguish between fact and opinion. Disturbingly, almost half of the responses contained at least one significant problem, and 31% had serious sourcing errors. For DW specifically, 53% of AI answers had significant issues.
Among the factual inaccuracies, one notable example identified Olaf Scholz as the current German chancellor even though Friedrich Merz had already succeeded him. Another falsely named Jens Stoltenberg as NATO Secretary General, despite Mark Rutte already holding that position.
AI assistants are an increasingly popular way to get news: the Reuters Institute’s Digital News Report 2025 found that 7% of online news users turn to AI chatbots for this purpose, a figure that rises to 15% among those under 25. The study’s authors warn that these widespread inaccuracies threaten public trust in media.
Jean Philip De Tender, media director of the European Broadcasting Union (EBU), stated, “These failings are not isolated incidents. They are systemic and global, which can undermine democratic engagement.”
This study follows earlier research by the BBC, which found that over half of AI responses had significant issues. In total, the researchers examined around 3,000 answers across multiple languages, with questions covering topics such as the Ukraine minerals deal and Donald Trump’s eligibility for a third term.
Notably, Gemini performed worst, with sourcing issues in 72% of its responses. While the results show slight improvements over previous findings, overall error levels remain concerning.
Peter Archer, the BBC’s programme director for generative AI, acknowledged both the potential of AI and the need for accurate reporting. “People must trust what they read and see,” he emphasized.
The researchers are calling on governments to act, urging regulators to enforce existing laws on information integrity and to ensure independent monitoring of how AI assistants handle news.
In response to these challenges, the EBU and other media organizations have launched a campaign called “Facts In: Facts Out,” which urges AI companies to take responsibility for ensuring their products neither distort nor misrepresent the news they draw on.
The importance of this issue can’t be overstated. As AI tools become more deeply integrated into our daily lives, their accuracy and integrity are crucial to a well-informed public.
For more insights on this topic, you can explore related studies and findings on AI and journalism from trusted sources like the Pew Research Center.

