OpenAI CEO Sam Altman has announced that the company’s flagship chatbot, ChatGPT, will soon permit a wider range of content, including erotica for verified adult users, as part of a new principle to “treat adult users like adults.”
In a post on the social media platform X on Tuesday, Mr. Altman outlined a significant policy shift for the popular AI, stating that upcoming versions will behave in a more human-like manner. The changes, expected to roll out in December alongside more comprehensive age-gating features, mark a departure from the platform’s previously restrictive content policies.
“We realise this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right,” Mr. Altman posted, referencing earlier restrictions designed to address mental health concerns. He said the company could relax those rules “now that we have been able to mitigate the serious mental health issues and have new tools.”
The move is seen by industry analysts as a strategy to attract more paying subscribers and compete in a crowded market, mirroring a similar decision by Elon Musk’s xAI, which recently introduced sexually explicit chatbots to its Grok platform. According to Tulane University business professor Rob Lalka, OpenAI needs to “continue to push along that exponential growth curve, achieving market domination as much as they can.”
However, the announcement has intensified concerns among child safety advocates and lawmakers about the need for tighter regulation of AI chatbots. The decision comes as OpenAI faces a wrongful death lawsuit from the parents of a 16-year-old who took his own life. The lawsuit alleges that conversations with ChatGPT played a role in the tragedy.
Critics question how OpenAI will effectively prevent minors from accessing adult-only content. “How are they going to make sure that children are not able to access the portions of ChatGPT that are adult-only and provide erotica?” asked Jenny Kim, a partner at the law firm Boies Schiller Flexner. “OpenAI, like most of big tech in this space, is just using people like guinea pigs.”
The regulatory landscape for AI remains fragmented. In the United States, the Federal Trade Commission (FTC) has launched an inquiry into how AI chatbots interact with children, and bipartisan legislation has been introduced in the Senate to allow liability claims against developers. In contrast, California Governor Gavin Newsom recently vetoed a bill that would have restricted AI companions for children, arguing it is “imperative that adolescents learn how to safely interact with AI systems.”
OpenAI, which has seen rapid revenue growth but has not yet achieved profitability, did not respond to requests for comment on the matter.