OpenAI plans to allow erotica and ease mental health restrictions for adult users


ChatGPT is proceeding cautiously for now, but the chatbot could loosen up by the end of the year.

In recent weeks, the AI chatbot has been operating under tighter restrictions as OpenAI tries to address concerns that it doesn’t handle sensitive mental health issues well. But CEO Sam Altman said in a post on X on Tuesday that the company will ease some of those restrictions because it is now “capable of mitigating serious mental health issues.”

Altman said in a follow-up post on Wednesday that the changes are expected to prioritize teen safety while also “treating adult users like adults.” Tools built into the models to address sensitive topics and respond to mental health crises will still be available to all users, but adult users will have more freedom to use ChatGPT without preemptive pop-ups or redirects.

“This doesn’t apply across the board, of course: for example, we won’t allow things that cause harm to others, and we will treat users experiencing mental health crises very differently than users who don’t,” Altman said. “Without being paternalistic, we will try to help users achieve their long-term goals.”

(Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging that it infringed Ziff Davis’s copyrights in training and operating its AI systems.)

Other changes are also expected. Altman said the company could allow “erotica” for verified adult users as it implements an “age-gating” system, or age-restricted content, in December. Altman said adult content is part of the company’s principle of “treating adult users like adults.”

Altman’s post also announced a new version of ChatGPT arriving in the next few weeks, with a personality that behaves more like the company’s GPT-4o model. Users complained after the company replaced 4o with the more impersonal GPT-5 earlier this year, saying the new model lacked the engaging, fun personality of earlier versions.

“If you want ChatGPT to respond in a very human-like way, use a large number of emojis, or act like a friend, ChatGPT should do it (but only if you want it to, and not because we increase usage),” Altman wrote.


After OpenAI was sued by parents who claimed ChatGPT contributed to their teen’s suicide, the company imposed a host of new restrictions and changes, including parental controls, alerts for risky behavior and a teen-friendly version of the chatbot. Over the summer, OpenAI implemented break reminders that encourage people to occasionally pause their chats with the bot.

On Tuesday, the company also announced the creation of an expert council on AI and well-being, which includes members with expertise in psychology and human behavior.

This comes as lawmakers and regulators sound the alarm about the risks AI tools pose to people, especially children. On Monday, California Gov. Gavin Newsom signed new restrictions on AI chatbots into law. Last month, the FTC launched an investigation into several AI companies, including OpenAI.


