An AI customer service chatbot invented a company policy – and created a mess
On Monday, a developer using the popular AI-powered code editor Cursor noticed something strange: switching between machines instantly logged them out, breaking a common workflow for programmers who use multiple devices. When the user contacted Cursor support, an agent named “Sam” told them the behavior was expected under a new policy. But no such policy existed, and Sam was a bot. The AI model made the policy up, sparking a wave of complaints and cancellation threats documented on Hacker News and Reddit.
This marks the latest instance of AI confabulations (also called “hallucinations”) causing potential business damage. Confabulations are a type of “creative gap-filling” response in which AI models invent plausible-sounding but false information. Rather than admitting uncertainty, AI models often prioritize producing plausible, confident answers, even when that means manufacturing information from scratch.
For companies deploying these systems in customer-facing roles without human oversight, the consequences can be immediate and costly: frustrated customers, damaged trust, and, in Cursor’s case, potentially canceled subscriptions.
How it unfolded
The incident began when a user noticed that, while switching between a desktop, a laptop, and a remote development box, their Cursor sessions were being unexpectedly terminated.
“Logging into Cursor on one machine immediately invalidates the session on any other machine,” wrote BrokenToasterOven in a message that was later deleted by r/cursor moderators. “This is a significant UX regression.”
Confused and frustrated, the user emailed Cursor support and quickly received a reply from Sam: “Cursor is designed to work with one device per subscription as a core security feature,” the email read. The answer sounded definitive and official, and the user did not suspect that Sam was not human.
After the initial Reddit post, users took the reply as official confirmation of an actual policy change – one that broke habits essential to many programmers’ daily routines. “Multi-device workflows are table stakes for devs,” one user wrote.
Shortly thereafter, several users publicly announced their subscription cancellations on Reddit, citing the nonexistent policy as their reason. “I literally just cancelled my sub,” wrote the original Reddit poster, adding that their workplace was now “purging it completely.” Others joined in: “Yep, I’m canceling as well, this is asinine.” Soon after, moderators locked the Reddit thread and removed the original post.
“Hey! We have no such policy,” wrote a Cursor representative in a Reddit reply three hours later. “You’re of course free to use Cursor on multiple machines. Unfortunately, this is an incorrect response from a front-line AI support bot.”
AI Confabulations as a business risk
The Cursor debacle recalls an episode from February 2024, when Air Canada was ordered to honor a refund policy invented by its own chatbot. In that incident, Jake Moffatt contacted Air Canada’s support after his grandmother died, and the airline’s AI agent incorrectly told him he could book a regular-priced flight and apply for a bereavement discount retroactively. When Air Canada later denied his refund request, the company argued that the chatbot was “a separate legal entity that is responsible for its own actions.” A Canadian tribunal rejected that defense, ruling that companies are responsible for information provided by their AI tools.
Rather than disputing responsibility as Air Canada did, Cursor acknowledged the error and took steps to make amends. Cursor cofounder Michael Truell later apologized on Hacker News for the confusion over the nonexistent policy, explaining that the user had been refunded and that the issue resulted from a backend change meant to improve session security, which unintentionally created session-invalidation problems for some users.
“Any AI responses used for email support are now clearly labeled as such,” he added. “We use AI-assisted responses as the first filter for email support.”
Still, the incident raised lingering questions about disclosure to users, since many people who interacted with Sam apparently believed it was human. “LLMs pretending to be people (you named it Sam!) and not labeled as such is clearly intended to be deceptive,” one user wrote on Hacker News.
While Cursor fixed the technical bug, the episode illustrates the risks of deploying AI models in customer-facing roles without proper safeguards and transparency. For a company that sells AI productivity tools to developers, having its own AI support system invent a policy that alienated its core users is a particularly awkward self-inflicted wound.
“There is a certain amount of irony that people try really hard to say that hallucinations are not a big problem anymore,” one user wrote on Hacker News, “and then a company that would benefit from that narrative gets directly hurt by it.”
This story originally appeared on Ars Technica.