Physical Address
304 North Cardinal St.
Dorchester Center, MA 02124
On Thursday, OpenAI announced a new feature, called Trusted Contact, designed to alert a trusted third party if self-harm comes up during a conversation. The feature allows an adult ChatGPT user to designate someone else, such as a friend or family member, as a trusted contact within their account. If a conversation turns toward self-harm, OpenAI will now encourage the user to reach out to that contact. It can also automatically send the contact an alert, encouraging them to check in with the user.
OpenAI has faced a wave of lawsuits from the families of people who died by suicide after talking to its chatbot. In several cases, families allege that ChatGPT encouraged their loved ones to kill themselves, or even helped them plan it.
OpenAI currently uses a combination of automation and human review to handle potentially harmful incidents. Certain conversational triggers alert the company’s systems to suicidal ideation, which then relay the information to a human safety team. The company says that whenever such a notification is triggered, the incident is reviewed by a person. “We strive to review these safety notices in less than one hour,” the company says.
If the internal OpenAI team determines that the situation poses a serious safety risk, ChatGPT will send an alert to the trusted contact – either via email, text message, or in-app notification. The alert is designed to be brief and to encourage the contact to check in with the person in question. The company says it does not include detailed information about what was discussed, as a way to protect user privacy.

The Trusted Contact feature follows the parental controls the company introduced last September, which gave parents the ability to exercise some control over their teens’ accounts, including receiving safety notices designed to alert parents if OpenAI’s systems believe their child faces a “serious safety risk.” For some time now, ChatGPT has also included automatic prompts to seek professional help should a conversation veer toward self-harm.
Notably, Trusted Contact is opt-in, and even if the protection is enabled on a specific account, any user can have multiple ChatGPT accounts. OpenAI’s parental controls are also optional, which presents a similar limitation.
“Trusted Contact is part of OpenAI’s broader effort to build AI systems that … help people in difficult moments,” the company wrote in its latest announcement. “We will continue to work with clinicians, researchers, and policymakers to improve how AI systems respond when people experience distress.”