OpenAI’s Trusted Contact is designed to make ChatGPT do something a chatbot cannot do alone: involve a real person the user has chosen in advance. The opt-in feature lets adult ChatGPT users name a trusted contact, such as a friend or family member, who may be alerted if a conversation suggests self-harm or suicide risk [1].
What Trusted Contact does
Trusted Contact is a safety setting for adult ChatGPT users. After a user opts in and designates someone, ChatGPT may encourage the user to reach out to that person if self-harm ideation appears in a conversation [1].
The feature can also send the chosen contact an automated alert encouraging them to check in with the user [1]. That makes it different from a standard crisis-resource message: it can add a human support layer involving someone the user selected ahead of time [1][4].
