OpenAI has shelved plans to launch its previously announced erotic chatbot, according to a report, backing away from a controversial expansion of ChatGPT that would have allowed adult users to generate sexual content.
The reversal, first reported by the Financial Times on Thursday, follows internal concerns about the societal impact of sexualized artificial intelligence. In January, members of OpenAI’s Expert Council on Well-Being and AI reportedly warned that erotic chat features could foster unhealthy emotional dependency among users, and risk turning the chatbot into what one member described as a “sexy suicide coach.”
OpenAI declined Decrypt’s request to comment on the status of the erotic mode, and the firm has yet to post about its fate.
The decision to cancel what was reportedly to be called “Citron mode” comes two days after OpenAI canceled its Sora text-to-video model, as the company moves to focus development on a unified AI platform rather than a collection of specialized tools.
The move marks a departure from the direction outlined by CEO Sam Altman as recently as October. At the time, Altman said OpenAI planned to allow verified adults to access romantic and erotic content once a robust age-verification system was in place.
Altman described the idea as part of a broader effort to treat adult users with greater autonomy while maintaining safeguards for minors. By December, however, the timeline was pushed to 2026 as the company continued to refine its age-estimation technology.
While OpenAI may be getting out of the adult chatbot business before it ever really got into it, AI models do not necessarily need an “erotic mode” for users to form connections with them.
When OpenAI deprecated GPT-4o last summer, users flooded social media with calls to restore the model, saying they had formed personal and emotional relationships with the chatbot — a reaction that reflects the broader debate around erotic chatbots and how people interact with AI.
In June, researchers at Waseda University in Tokyo published research finding that 75% of participants reported turning to AI systems for emotional advice.
At the same time, AI developers are facing growing scrutiny as lawsuits test whether conversational AI systems are responsible for reinforcing delusional beliefs or harmful behavior among vulnerable users.