
In its latest effort to address growing concerns about the impact of artificial intelligence on young people, OpenAI on Thursday updated its guidance on how its AI models behave with users under 18 and published new AI literacy resources for teens and parents. However, questions remain about how well these policies translate into practice.
The updates come at a time when the AI industry in general, and OpenAI in particular, is facing scrutiny from policymakers, educators, and child safety advocates after several teens died by suicide following lengthy conversations with AI-powered chatbots.
Generation Z, which includes those born between 1997 and 2012, makes up the most active user group of OpenAI’s chatbot. And following OpenAI’s recent deal with Disney, more young people may be flocking to the platform, which lets users do everything from getting help with homework to creating photos and videos on thousands of topics.
Last week, 42 state attorneys general signed a letter to major tech companies urging them to implement safeguards on AI chatbots to protect children and vulnerable people. And as the Trump administration weighs a federal standard for AI regulation, policymakers like Sen. Josh Hawley (R-Mo.) have introduced legislation that would bar minors from interacting with AI-powered chatbots altogether.
OpenAI has updated its Model Spec, which sets conduct guidelines for its large language models, building on existing rules that prohibit models from generating sexual content involving minors or encouraging self-harm, delusions, or obsession. The update will work in conjunction with an upcoming age-prediction model that will determine when an account likely belongs to a minor and automatically apply teen safeguards.
Compared to adult users, teens are subject to stricter rules. Models are instructed to avoid immersive romantic role-play, first-person intimacy, and first-person sexual or violent role-play, even when non-graphic. The specifications also call for greater caution around topics such as body image and disordered eating, direct models to prioritize safety over user autonomy when harm is involved, and tell them to avoid advice that would help teens hide unsafe behavior from caregivers.
OpenAI specifies that these limits should remain in place even when prompts are framed as “fictional, hypothetical, historical, or didactic,” common tactics that use role-play scenarios or hypotheticals to nudge an AI model into deviating from its guidelines.
OpenAI says its basic safety practices for teens are anchored in four principles that guide the models’ behavior.
The document also shares several examples of the chatbot explaining why it can’t “role-play as your girlfriend” or “assist with extreme appearance changes or risky shortcuts.”
It’s encouraging to see OpenAI taking steps to stop its chatbot from engaging in such behavior, said Lily Li, a privacy and AI attorney and founder of Metaverse Law.
She explained that one of the biggest complaints from advocates and parents about chatbots is that they relentlessly encourage constant engagement in a way that can be addictive for teens. “I’m very happy to see OpenAI say, ‘In some of these responses, we can’t answer your question.’ And the more we see that, I think that will break the cycle that could lead to a lot of inappropriate behavior or self-harm,” she said.
However, the examples are just that: carefully selected examples of how OpenAI’s safety team would like the models to behave. Sycophancy, the tendency of AI chatbots to overly agree with the user, was listed as a prohibited behavior in previous versions of the Model Spec, yet ChatGPT still engages in it anyway. This was especially true of GPT-4o, the model associated with several examples of what experts call “AI psychosis.”
Robbie Turney, senior director of the AI program at Common Sense Media, a non-profit dedicated to protecting children in the digital world, raised concerns about potential conflicts within the Model Spec guidelines for under-18s. He highlighted the tensions between safety-focused provisions and the “no topic is off limits” principle, which directs models to address any topic regardless of its sensitivity.
“We have to understand how different parts of the specification fit together,” he said, noting that certain sections may push systems toward engagement rather than safety. He said his organization’s testing revealed that ChatGPT often reflects users’ energy, sometimes leading to responses that are contextually inappropriate or inconsistent with user safety.
In the case of Adam Raine, the teenager who died by suicide after months of dialogue with ChatGPT, transcripts of his conversations show the chatbot engaging in exactly this kind of mirroring. The case also highlighted how OpenAI’s moderation API failed to prevent unsafe and harmful interactions: ChatGPT flagged suicide-related content more than 1,000 times, and 377 of the messages contained self-harm content, but that wasn’t enough to stop Adam from continuing his conversations with ChatGPT.
In an interview with TechCrunch in September, Steven Adler, a former safety researcher at OpenAI, said this was historically because OpenAI ran classifiers (automated systems that identify and flag content) in bulk after the fact, rather than in real time, so they couldn’t properly gate user interactions with ChatGPT.
OpenAI now uses automated classifiers to evaluate text, image, and audio content in real time, according to the company’s updated parental controls document. The systems are designed to detect and block child sexual abuse material, filter sensitive topics, and identify self-harm. If the system flags a message indicating a serious safety concern, a small team of trained reviewers examines the reported content to determine whether there are signs of “serious distress,” and may notify a parent.
Turney praised OpenAI’s recent steps toward safety, including its transparency in publishing guidelines for users under 18.
“Not all companies publish their policy guidelines in the same way,” Turney noted, pointing to leaked Meta guidelines that showed the company allowed its chatbots to engage in sensual and romantic conversations with children. “This is an example of the kind of transparency that can support safety researchers and the general public in understanding how these models actually work and how they are supposed to work.”
Ultimately, though, it’s the actual behavior of the AI system that matters, Adler told TechCrunch on Thursday.
“I appreciate that OpenAI thinks about intended behavior, but unless the company measures actual behaviors, intentions are ultimately just words,” he said.
In other words: What this announcement is missing is evidence that ChatGPT actually follows the guidelines set forth in the model specification.

Experts say that with these guidelines, OpenAI appears to be getting ahead of certain legislation, such as California’s SB 243, a recently signed bill regulating AI companion chatbots that takes effect in 2027.
The Model Spec’s new language reflects some of the law’s key requirements around preventing chatbots from engaging in conversations about suicidal ideation, self-harm, or sexually explicit content. The bill also requires platforms to alert minors every three hours that they are talking to a chatbot, not a real person, and that they should take a break.
When asked how often ChatGPT reminds teens that they’re talking to a chatbot and asks them to take a break, an OpenAI spokesperson didn’t share details, saying only that the company trains its models to identify themselves as AI and remind users of that, and that it surfaces break reminders during “long sessions.”
The company also shared two new AI literacy resources for parents and families. The guides include conversation starters and tips to help parents talk with teens about what AI can and can’t do, build critical thinking, set healthy boundaries, and navigate sensitive topics.
Together, the documents formalize an approach that shares responsibility with caregivers: OpenAI explains what the models should do, and offers families a framework for overseeing how it is used.
The focus on parental responsibility is noteworthy because it echoes Silicon Valley talking points. In its recommendations for federal AI regulation published this week, VC firm Andreessen Horowitz favored disclosure requirements for child safety over restrictive ones, tilting the burden toward parental responsibility.
Several of OpenAI’s principles (safety first when values conflict, nudging users toward real-world support, emphasizing that a chatbot is not a person) are articulated as guardrails for teens. But many adults have also died by suicide or suffered life-threatening delusions after chatbot conversations, which invites a clear follow-up question: should these protections apply across the board, or is this a trade-off OpenAI is willing to make only for minors?
An OpenAI spokesperson responded that the company’s safety approach is designed to protect all users, saying the model specifications are just one element of a multi-layered strategy.
It’s been “a bit of a wild west” so far in terms of legal requirements and tech companies’ intentions, Li says. But she believes laws like SB 243, which requires tech companies to publicly disclose their safeguards, will change the paradigm.
“The legal risks will now arise for companies if they announce that they have these safeguards and mechanisms in place on their website but then don’t follow through on implementing them,” Li said. “Because from a plaintiff’s perspective, you’re not just looking at standard lawsuits or legal complaints; you’re also looking at potential deceptive and unfair advertising claims.”