OpenAI has introduced a new optional safety feature for ChatGPT designed to enhance user well-being. Adult users can now assign a ‘Trusted Contact’—such as a friend, family member, or caregiver—who will be notified if OpenAI’s systems detect potential discussions about mental health crises, including self-harm or suicide.

How the ‘Trusted Contact’ Feature Works

The feature operates by monitoring ChatGPT conversations for language or topics that may indicate a user is in distress. If such concerns are detected, OpenAI will send an alert to the designated Trusted Contact, enabling them to reach out and offer support. This initiative is part of OpenAI’s broader efforts to prioritize user safety and mental health.

"Trusted Contact is designed around a simple, expert-validated premise: when someone may be in crisis, connecting with someone they know and trust can make a meaningful difference," OpenAI stated in its announcement. "It offers another layer of support alongside the localized helplines already available …"

Purpose and Implementation

The feature is not intended to replace professional mental health resources but to complement them. OpenAI emphasizes that localized crisis helplines remain the primary support system for individuals in need. The Trusted Contact feature is an additional safeguard intended to help enable timely support when a user may be at risk.

Key Details:

  • Availability: Optional feature for adult ChatGPT users.
  • Notification Criteria: Alerts sent if OpenAI detects discussions about self-harm, suicide, or other safety concerns.
  • Support Role: Designed to work alongside existing mental health helplines, not replace them.

Why This Feature Matters

Mental health crises often require immediate attention, and timely support from trusted individuals can be critical. By integrating this feature, OpenAI aims to bridge the gap between automated systems and human intervention, helping users receive support when they need it most.

Source: The Verge