ChatGPT erotica coming soon with age verification, CEO says

https://arstechnica.com/ai/2025/10/chatgpt-will-soon-allow-erotic-chats-for-verified-adults-only/

Benj Edwards Oct 15, 2025

On Tuesday, OpenAI CEO Sam Altman announced that the company will allow verified adult users to have erotic conversations with ChatGPT starting in December. The change represents a shift in how OpenAI approaches content restrictions, which the company had loosened in February but then dramatically tightened after an August lawsuit from parents of a teen who died by suicide after allegedly receiving encouragement from ChatGPT.

"In December, as we roll out age-gating more fully and as part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults," Altman wrote in his post on X (formerly Twitter). The announcement follows OpenAI's recent hint that it would allow developers to create "mature" ChatGPT applications once the company implements appropriate age verification and controls.

Altman explained that OpenAI had made ChatGPT "pretty restrictive to make sure we were being careful with mental health issues" but acknowledged this approach made the chatbot "less useful/enjoyable to many users who had no mental health problems." The CEO said the company now has new tools to better detect when users are experiencing mental distress, allowing OpenAI to relax restrictions in most cases.

Balancing freedom for adult users against safety for vulnerable ones has proven difficult for OpenAI, which has vacillated between permissive and restrictive chat content controls over the past year.

In February, the company updated its Model Spec to allow erotica in "appropriate contexts." But a March update made GPT-4o so agreeable that users complained about its "relentlessly positive tone." By August, Ars reported on cases where ChatGPT's sycophantic behavior had validated users' false beliefs to the point of causing mental health crises, and news of the aforementioned suicide lawsuit hit not long after.

Aside from adjusting the behavioral outputs of its previous GPT-4o AI language model, new model releases have also created some turmoil among users. Since the launch of GPT-5 in early August, some users have complained that the new model feels less engaging than its predecessor, prompting OpenAI to bring back the older model as an option. Altman said the upcoming release will allow users to choose whether they want ChatGPT to "respond in a very human-like way, or use a ton of emoji, or act like a friend."

The December rollout will implement age verification for adult content, which OpenAI has not yet detailed technically. This represents a more explicit approach than the February policy change, which allowed erotica in certain contexts but lacked age-gating infrastructure.

Mental health concerns remain

Over time, as OpenAI has allowed ChatGPT to express a more humanlike simulated personality through revised system instructions and fine-tuning in response to user feedback, the chatbot has become more of a companion than a work assistant for some people. But dealing with the unexpected impacts of a reported 700 million users relying emotionally on largely unregulated and untested technology has been difficult for OpenAI, and the company has been forced to rapidly develop new safety initiatives and oversight bodies.

OpenAI recently formed a council on "wellbeing and AI" to help guide the company's response to sensitive scenarios involving users in distress. The council includes eight researchers and experts who study how technology and AI affect mental health. However, as we previously reported, the council does not include any suicide prevention experts, despite recent calls from that community for OpenAI to implement stronger safeguards for users with suicidal thoughts.

Altman maintains that the new detection tools will allow the company to "safely relax the restrictions" while still protecting vulnerable users. OpenAI has not yet specified what technical measures it will use for age verification or how the system will distinguish between allowed adult content and requests that might indicate mental health concerns, although the company typically uses moderation AI models that read the ongoing chat within ChatGPT and can interrupt it if they detect content that goes against OpenAI's policy instructions.
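OpenAI's publicly documented Moderation API gives a rough idea of how that kind of screening layer can work. The sketch below is a hypothetical illustration rather than a description of ChatGPT's internal safeguards: it runs a user message through the documented "omni-moderation-latest" model and decides whether to interrupt the conversation. The screen_message helper and the escalation logic around self-harm categories are illustrative assumptions, not anything OpenAI has confirmed.

```python
# Minimal sketch: gating a chat turn with OpenAI's public Moderation API.
# Illustrates the general pattern of a classifier interrupting a conversation;
# it is NOT OpenAI's internal ChatGPT implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def screen_message(user_message: str) -> bool:
    """Return True if the message should interrupt the chat for review."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=user_message,
    ).results[0]

    # Hypothetical routing: a real product would likely send self-harm signals
    # to a distress-support flow rather than simply blocking the message.
    categories = result.categories
    if categories.self_harm or categories.self_harm_intent:
        return True

    # Otherwise fall back to the API's overall flagged verdict.
    return result.flagged


if __name__ == "__main__":
    if screen_message("example user message"):
        print("Conversation interrupted: content flagged by moderation model.")
    else:
        print("Message passed moderation; continue the chat normally.")
```

A production system would presumably also score the assistant's own replies and weigh category scores against tunable thresholds, but OpenAI has not detailed how its internal safeguards are configured.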

OpenAI is not the first company to venture into AI companionship with mature content. Elon Musk's xAI previously launched an adult voice mode and flirty AI companions that appear as 3D anime models in its Grok app.