OpenAI introduces new ‘Trusted Contact’ safeguard for cases of possible self-harm

# OpenAI Adds New Safety Feature for Users in Crisis

OpenAI is launching a "Trusted Contact" safeguard that helps protect ChatGPT users who may be struggling with self-harm by allowing them to designate someone they trust to be notified if concerning conversations occur. Think of it as a safety net built into the app: if you're talking to ChatGPT about harming yourself, the system can alert a friend or family member you've pre-approved to check on you. It's part of OpenAI's broader push to use AI responsibly when people are going through mental health crises.
The company is expanding its efforts to protect ChatGPT users in cases where conversations may turn to self-harm.


