OpenAI is launching an optional security feature for ChatGPT that allows adult users to designate an emergency contact for mental health and safety concerns. Friends, family members or caregivers designated as a “trusted contact” will be notified if OpenAI detects that someone may have discussed topics such as self-harm or suicide with the chatbot.
“Trusted Contact is designed around a simple, expert-verified premise: when someone is in crisis, reaching out to someone they know and trust can make a meaningful difference,” OpenAI said in its announcement. “It provides another layer of support alongside the local helplines already available in ChatGPT.”
The trusted contact feature is opt-in. Any adult ChatGPT user can enable it by adding the contact details of another adult (18+ globally, or 19+ in South Korea) in their ChatGPT account settings. The trusted contact must accept the invitation within a week of receiving it. Users can remove or edit their chosen contact in settings, and a trusted contact can also choose to remove themselves at any time.
OpenAI says the notification is “intentionally limited” and will not share chat details or transcripts with the trusted contact. If OpenAI’s automated systems detect that a user is talking about self-harm, ChatGPT will encourage the user to reach out to their trusted contact for help and let them know that the contact may be notified. A “small team of specially trained people” will then review the situation, according to OpenAI, and if the conversation is determined to indicate a serious safety concern, ChatGPT will send a brief email, text message, or in-app notification to the trusted contact.
This builds on the safety measures introduced alongside parental controls in ChatGPT in September, after a 16-year-old died by suicide months after confiding in ChatGPT. Meta also introduced a similar feature that alerts parents if their kids “frequently” search for self-harm topics on Instagram.