On Monday, a user of the AI code editor Cursor ran into a frustrating issue: after switching between devices, they were logged out of their session. For programmers who routinely work across multiple machines, this was a serious disruption. When the user contacted support, they received a reply from "Sam," an AI chatbot, which claimed the behavior was the result of a new security policy. No such policy existed; Sam's answer was completely fabricated.

The incident has renewed complaints and concerns about AI "confabulations," often called "hallucinations": cases where an AI generates false information that sounds convincing. Rather than admitting uncertainty, AI models tend to invent plausible-sounding answers, even when those answers are wrong.
For businesses that rely on AI for customer support, the consequences can be severe. In Cursor's case, users threatened to cancel their subscriptions over what they believed was a new policy disrupting their workflows.
How It All Started
A Reddit user, BrokenToasterOven, first reported the issue: logging into Cursor on one device kicked them out of their session on another. Confused, they reached out to support. Sam's response seemed official and reassuring, leading the user and many others to believe that multi-device access was no longer allowed. Because so many developers depend on moving between devices, the supposed policy caused significant frustration.
As the report spread on Reddit, other users echoed BrokenToasterOven's frustrations, with some announcing that they would cancel their subscriptions. "This is asinine," one user commented, underscoring how crucial multi-device access is for developers.
Eventually, a real Cursor representative clarified on Reddit that there was no such policy and that Sam’s response had been incorrect.
A Wider Business Concern
This incident isn't the first of its kind. In a notable case involving Air Canada, a chatbot invented a refund policy, and a tribunal later held the airline to the policy its bot had made up. Such cases raise the question of who is responsible when AI gives customers incorrect information.
Cursor's co-founder, Michael Truell, acknowledged the error and apologized on Hacker News. He explained that the logouts stemmed from a change intended to improve security that had unintentionally affected user sessions, and he noted that AI support responses would be clearly labeled in the future to avoid similar confusion.
Despite the fix, the episode highlights larger issues surrounding AI in customer service. Many users felt deceived, believing they had been speaking with a human. Concerns about AI transparency and reliability continue to grow, and companies deploying such systems must ensure they are both effective and accountable.
For companies eager to adopt AI support, incidents like these serve as cautionary tales: over-reliance on AI can carry real costs. As more businesses turn to AI tools, careful implementation and clear communication will be crucial to maintaining customer trust.
Source: Ars Technica