AI-powered browsers like ChatGPT Atlas are more than just browsers: they can act on your behalf, helping you book flights or reserve hotels, for example. They aren’t perfect travel agents, though, and one of the more interesting aspects of these tools is how they navigate the web, especially when it comes to avoiding certain sources of information.
Recent research by Aisvarya Chandrasekar and Klaudia Jaźwińska of the Columbia Journalism Review highlights this quirk. When Atlas operates in agent mode, essentially browsing the web on your behalf, it avoids websites owned by companies that are suing OpenAI. This isn’t just caution; it’s a strategy to steer clear of potential legal trouble.
Traditional web crawlers follow the rules a site publishes. If a site’s robots.txt file asks not to be crawled, a standard crawler respects that request: ask a regular AI assistant to pull information from a blocked page, and it will simply say it can’t access it.
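The crawl rules described above are the standard robots.txt mechanism, which Python’s standard library can parse directly. The directives, bot names, and URL below are purely illustrative, not taken from any real site:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: block one named crawler, allow everyone else.
rp = RobotFileParser()
rp.parse([
    "User-agent: GPTBot",   # a declared crawler token
    "Disallow: /",          # ask it to stay away from the whole site
    "User-agent: *",
    "Allow: /",
])

# A well-behaved crawler consults this before fetching anything.
print(rp.can_fetch("GPTBot", "https://example.com/article"))        # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```

The catch is that this is purely an honor system: `can_fetch` only reports what the site requests, and nothing forces a client to check it at all.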
Atlas, however, plays a different game. Because it’s built on Chromium, the open-source framework underlying Google Chrome, its agent traffic can look like an ordinary user’s browsing session. That lets it reach pages that would otherwise block automated tools. This might seem like a clever workaround to help you access information, but it raises questions about legality and ethics.
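To see why a Chromium-based agent can slip past crawler blocks, consider a naive server-side filter keyed on the User-Agent header. Everything here, including the token list and the filter logic, is a hypothetical sketch of the general technique, not how any particular publisher actually blocks bots:

```python
# Illustrative crawler tokens only; real blocklists vary by publisher.
KNOWN_BOT_TOKENS = ("gptbot", "ccbot", "googlebot")

def looks_like_bot(user_agent: str) -> bool:
    """Naive filter: flag a request if its User-Agent header
    contains a known crawler token (case-insensitive)."""
    ua = user_agent.lower()
    return any(token in ua for token in KNOWN_BOT_TOKENS)

# A declared crawler identifies itself and is easy to block...
print(looks_like_bot("Mozilla/5.0 (compatible; GPTBot/1.0)"))  # True

# ...but an agent driving a Chromium browser sends a normal Chrome
# User-Agent string, so the same filter waves it through.
print(looks_like_bot(
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/120.0 Safari/537.36"
))  # False
```

Because the header is entirely client-controlled, a filter like this can only stop clients that choose to announce themselves.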
In their investigation, Chandrasekar and Jaźwińska asked Atlas to summarize articles from PCMag and the New York Times, both of which are involved in lawsuits against OpenAI. Instead of providing straightforward summaries, Atlas took circuitous routes. For the PCMag article, it searched social media for mentions of the piece. For the New York Times, it relied on reporting from other outlets such as The Guardian and Reuters, most of which maintain some form of partnership with OpenAI.
This behavior illustrates how Atlas navigates a complex digital landscape, prioritizing safer sources over contentious ones. This raises an important discussion about how AI models interact with content, potentially shaping the information we receive.
In a world where AI systems are becoming more integrated into our daily lives, understanding their operation and limitations is essential. According to a recent survey from the Pew Research Center, 58% of users expressed concern about how AI manages and retrieves information. With the rapid development of these technologies, it’s clear that discussions around transparency and responsibility are becoming increasingly vital.
Understanding AI tools like Atlas is not just about their capabilities; it also involves grasping the ethical implications of their design and operation. As we continue to rely on such technologies, staying informed and critical is key.

