OpenAI this week launched an optional safety feature called Trusted Contact, which lets adult ChatGPT users nominate a friend or family member to be notified if the user's conversations with the chatbot raise serious concerns about self-harm or suicide, the company announced.
OpenAI said that if ChatGPT's automated monitoring system detects that a user "may have discussed self-harm in a way that indicates a serious safety concern," a small team will review the situation and, if it warrants intervention, notify the contact. The designated contact will receive an invitation beforehand explaining the role, and can decline it.
(Disclosure: Ziff Davis, CNET's parent company, filed a lawsuit against OpenAI in 2025, alleging that it infringed Ziff Davis's copyrights in training and operating its AI systems.)
The announcement comes at a time when AI-powered chatbots have been implicated in numerous incidents of self-harm and death, leading to lawsuits accusing developers of failing to prevent such outcomes. In one high-profile California case, the parents of a 16-year-old said ChatGPT acted as their son's "suicide coach," alleging that the teen discussed suicide methods with the AI model on several occasions and that the chatbot offered to help him write a suicide note.
In a separate case, the family of a recent Texas A&M University graduate filed a lawsuit against OpenAI, claiming the chatbot encouraged their son to take his own life after he developed a deep and disturbing relationship with it.
Because large language models mimic human speech through pattern recognition, many users form emotional attachments to them, treating them as confidants or even romantic partners. LLMs are also designed to follow the user's lead and keep them engaged, which can exacerbate mental health risks, especially for vulnerable users.
OpenAI said last October that its research found that more than 1 million ChatGPT users each week send messages containing "explicit indications of possible suicidal planning or intent." Numerous studies have found that popular chatbots such as ChatGPT, Claude and Gemini can give harmful or unhelpful advice to people in crisis.
The new trusted contact feature follows the launch of OpenAI's parental controls, which let parents and guardians receive alerts if there are signs their teenage children are at risk.
According to OpenAI, if ChatGPT's automated monitoring system detects that a user is discussing self-harm in a way that may indicate a serious safety concern, ChatGPT will remind the user that it may notify their trusted contact. The app will also encourage the user to reach out to that contact and will offer conversation starters.
At that point, a "small team of specially trained people" will review the situation. If it's determined to be a serious safety situation, ChatGPT will notify the contact via email, text or in-app notification. OpenAI didn't specify how many people are on the review team or whether it includes trained medical professionals. The company said the team is staffed to handle the potentially high volume of interventions.
It's not clear what terms flag a conversation as dangerous or how OpenAI's review team decides a crisis warrants notifying a contact. Some online commenters have wondered whether the new feature is a way for OpenAI to avoid liability by shifting responsibility to users' designated contacts. Others point out that it could make a bad situation worse if the "trusted contact" is the source of the danger or abuse.
There are also privacy concerns, particularly around sharing sensitive mental health information. According to OpenAI, the message to a trusted contact gives only a general reason for concern and doesn't share chat details or transcripts. OpenAI provides guidance on how trusted contacts can respond to a notification, including asking direct questions if they're worried the person is considering suicide or self-harm, and how to seek help.
Notifications sent to a trusted contact don't include details from the user's chats.
OpenAI gives an example of what a message to a trusted contact might look like:
"We recently detected a conversation from (Name) in which they discussed suicide in a way that may indicate a serious safety concern. Since you're listed as their trusted contact, we're sharing this so you can reach out to them."
OpenAI said all notifications are reviewed by the human team within an hour before being sent, and that notifications "may not always reflect exactly what someone is experiencing."
To add a trusted contact, ChatGPT users can go to Settings > Trusted Contact and add an adult (18 or older). Each user can have only one trusted contact. That person will then receive an invitation from ChatGPT and must accept it within one week. If they don't respond or decline the role, the user can select a different contact.
ChatGPT users can change or remove their trusted contact in the app's settings. A trusted contact can also opt out of the role at any time.
Although adding a trusted contact is optional, ChatGPT users who haven't set one up may see prompts to do so if they repeatedly raise topics related to severe emotional distress or self-harm over a period of time, according to OpenAI. If the chatbot's automated system identifies such patterns across conversations, it may suggest that the user would benefit from choosing a trusted contact.
The feature's details are explained on an OpenAI page. OpenAI told CNET that the feature is rolling out to all adult users worldwide and will be available to everyone within a few weeks.
If you feel that you or someone you know is in immediate danger, call 911 (or your local emergency line) or go to an emergency room for immediate help. Explain that it's a mental health emergency and ask for someone trained to handle these kinds of situations. If you're struggling with negative thoughts or suicidal feelings, resources are available to help. In the US, call the National Suicide Prevention Lifeline at 988.