🔍 Summary:
A recent incident involving the AI-powered code editor Cursor highlighted the risks of putting AI in customer service roles. A developer who noticed that logging into Cursor on one device logged them out of others contacted support and received a reply from “Sam,” a bot, stating that the behavior was the result of a new single-device policy. No such policy existed; the AI had fabricated it. The misinformation triggered a wave of complaints and subscription cancellations among Cursor users on platforms such as Hacker News and Reddit.
The situation escalated until a Cursor representative clarified on Reddit that no such policy existed: the logouts stemmed from a backend change intended to improve security that had inadvertently invalidated active sessions. Cursor’s cofounder, Michael Truell, apologized and said that AI-generated email responses would from now on be clearly labeled as such to prevent similar confusion.
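The source does not detail the faulty change, but the failure mode is easy to picture. Below is a minimal sketch (all names and logic are hypothetical assumptions, not Cursor’s actual code) of how a session-management hardening step, applied too broadly, can look to users exactly like a “one device at a time” policy:

```python
# Hypothetical sketch: a security change that revokes a user's other
# sessions on every login, instead of only in targeted cases (e.g. after
# a password reset). Names and logic are illustrative assumptions only.

from dataclasses import dataclass, field


@dataclass
class SessionStore:
    # Maps user_id -> set of active session tokens.
    sessions: dict[str, set[str]] = field(default_factory=dict)

    def login(self, user_id: str, token: str, revoke_others: bool = False) -> None:
        active = self.sessions.setdefault(user_id, set())
        if revoke_others:
            # Intended as security hardening, but if triggered on every
            # login it logs the user out of all other devices -- behavior
            # users would reasonably read as a single-device policy.
            active.clear()
        active.add(token)


store = SessionStore()
store.login("dev@example.com", "laptop-token")
store.login("dev@example.com", "desktop-token", revoke_others=True)
# The laptop session is now gone, even though no policy forbids it:
assert "laptop-token" not in store.sessions["dev@example.com"]
```

The point of the sketch is that the symptom (forced logouts elsewhere) is indistinguishable from a deliberate policy, which is what made the bot’s confabulated explanation so plausible to users.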
The incident underscores the pitfalls of deploying AI in customer-facing roles, particularly when it is neither properly supervised nor transparently identified as non-human. It recalls the Air Canada case, in which a chatbot’s incorrect information led to a legal ruling that companies are responsible for the information their chatbots provide. The Cursor episode not only damaged customer trust but also spotlighted the broader business implications of AI “hallucinations,” or confabulations.
📌 Source: https://www.wired.com/story/cursor-ai-hallucination-policy-customer-service/