
OpenAI is looking to hire a Chief of Readiness, a new executive role responsible for studying emerging risks from artificial intelligence in areas ranging from computer security to mental health.
In a post on X, CEO Sam Altman acknowledged that AI models are “starting to present some real challenges,” including “the potential impact of models on mental health,” as well as models that are “so good at computer security that they are starting to detect critical vulnerabilities.”
“If you want to help the world learn how to empower cybersecurity defenders with cutting-edge capabilities while ensuring that attackers cannot use them to do harm, ideally by making all systems more secure, and likewise for how to unlock biological capabilities and even gain confidence in the safety of operating systems that can self-improve, please consider applying,” Altman wrote.
OpenAI’s listing for the Chief of Readiness role describes the position as responsible for implementing the company’s preparedness framework, “our framework that explains OpenAI’s approach to tracking and preparing for frontier capabilities that create new risks of serious harm.”
The company first announced the formation of its preparedness team in 2023, saying it would be responsible for examining potential “catastrophic risks,” whether more immediate, such as phishing attacks, or more speculative, such as nuclear threats.
Less than a year later, OpenAI reassigned its head of preparedness, Aleksander Madry, to a role focused on AI reasoning. Other safety leaders at OpenAI have also left the company or moved into new roles outside of preparedness and safety.
The company also recently updated its preparedness framework, indicating that it may “adjust” its safety requirements if a competing AI lab releases a “high-risk” model without similar protections.
As Altman noted in his post, AI chatbots have faced increasing scrutiny over their impact on mental health. Recent lawsuits claim that OpenAI’s ChatGPT fueled users’ delusions, deepened their social isolation, and even led some to suicide. (The company has said it continues to work on improving ChatGPT’s ability to recognize signs of emotional distress and connect users with real-world support.)