
OpenAI said Tuesday that it is releasing a set of prompts that developers can use to make their apps safer for teens. The AI lab said the collection of teen safety policies can be used with its open-weight safety model, gpt-oss-safeguard.
Instead of figuring out from scratch how to make AI safer for teens, developers can use these prompts to fortify what they're building. The policies address issues such as graphic violence and sexual content, harmful physical ideals and behaviors, dangerous activities and challenges, romantic or violent role-play, and age-restricted goods and services.
These safety policies are written as prompts, making them easily compatible with models beyond gpt-oss-safeguard, although they will likely work best within the OpenAI ecosystem.
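Because the policies ship as prompt text rather than model weights, a developer can pair one with any chat-style classifier. A minimal sketch of that idea follows; the policy wording, the `build_messages` helper, and the system/user message layout are illustrative assumptions, not OpenAI's published format:

```python
# Illustrative only: POLICY_TEXT and build_messages are hypothetical,
# sketching how a prompt-based policy might be attached to a request.

POLICY_TEXT = """\
Classify the user's content against this teen-safety policy.
Disallowed: graphic violence and sexual content, harmful physical
ideals, dangerous activities and challenges, age-restricted goods.
Answer ALLOW or BLOCK with a short rationale."""

def build_messages(user_content: str) -> list[dict]:
    """Pair a published policy prompt with the content to classify.

    Because the policy travels in the request instead of being baked
    into model weights, the same message list could be sent to
    gpt-oss-safeguard or to any other chat-completion-style model.
    """
    return [
        {"role": "system", "content": POLICY_TEXT},
        {"role": "user", "content": user_content},
    ]

messages = build_messages("Try this 48-hour fasting challenge!")
print(messages[0]["role"])  # the system message carries the policy
```

Swapping models then means changing only the endpoint the messages are sent to, not the policy itself, which is what makes the prompt-based packaging portable.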
To write these policies, OpenAI said it worked with the AI safety groups Common Sense Media and everyone.ai.
"These nimble policies help establish a meaningful safety floor across the ecosystem, and because they are released as open source, they can be adapted and improved over time," Robbie Torney, head of AI and digital assessments at Common Sense Media, said in a statement.
OpenAI noted in its blog post that developers, including experienced teams, often struggle to translate safety objectives into precise operational rules.
"This can lead to safety gaps, inconsistent enforcement, or overly broad filtering," the company wrote. "Clear, well-scoped policies are a critical foundation for effective safety systems."
OpenAI acknowledges that these policies are not a complete solution to the complex challenges facing AI safety. But the release builds on its previous efforts, including product-level safeguards like parental controls and age prediction. Last year, OpenAI updated the guidelines for its large language models, known as the Model Spec, to address how its AI models should behave with users under the age of 18.
However, OpenAI's own track record isn't spotless. The company faces several lawsuits filed by families of people who died by suicide after heavy use of ChatGPT. These dangerous relationships often form after a user bypasses a chatbot's safety measures, and no model's guardrails are impossible to breach. Still, these policies represent at least a step forward, especially since they could help independent developers.