ChatGPT's Use as a Therapist May Lack Legal Confidentiality, Warns Sam Altman

ChatGPT's Use as a Therapist may Violate Legal Confidentiality, According to Sam Altman's Warning

In a significant development on July 25, 2025, OpenAI CEO Sam Altman issued a warning about the lack of legal confidentiality when using AI chatbots like ChatGPT for sensitive applications, such as therapy. This announcement comes amidst an ongoing legal battle between OpenAI and The New York Times.

The warning is a stark reminder for organizations considering AI in sensitive contexts, highlighting both legal and reputational risks. Conversations with AI chatbots currently carry no legal confidentiality or privilege protections analogous to those between a client and a lawyer, doctor, or therapist. Sensitive information shared with a chatbot could therefore be subject to discovery or subpoena in legal proceedings, and it is not protected under doctor-patient or attorney-client confidentiality laws.

Altman openly acknowledged that the industry has not yet established legal or privacy frameworks granting such confidentiality to AI conversations. He contrasted AI interactions with traditional privileged communications, noting that users often share very personal information with chatbots, yet unlike exchanges with human professionals, these interactions enjoy no legally recognized privacy. This poses significant privacy risks, because conversations may be stored and could be accessed if legally required.

For businesses, this legal gap implies that using AI chatbots for sensitive applications—such as legal advice, mental health counseling, or confidential business strategy—carries risks of exposure. Sensitive data shared could be discoverable or subject to subpoenas, undermining client confidentiality and possibly breaching regulatory requirements around data privacy or professional standards. This lack of legal privilege could deter companies from adopting AI tools for critical confidential tasks or force them to implement additional safeguards.

The issue of confidentiality in AI interactions is a growing concern, especially in the healthcare and wellness space where many companies have implemented AI chatbots for mental health support. The revelations could limit the effectiveness of AI in mental health applications and slow the growth of a promising segment within the digital health market.

To build trust with users and comply with evolving regulations, companies must prioritize robust data governance frameworks, strong security measures, and transparent communication about the limits of AI confidentiality. Policymakers, for their part, must develop comprehensive regulations around AI privacy and data security to protect users and businesses alike. By proactively addressing these challenges, businesses can responsibly harness the transformative potential of AI while safeguarding user privacy and trust.

In conclusion, the lack of established legal confidentiality protections for sensitive information shared with AI chatbots impacts both users and businesses by exposing sensitive data to legal risks. This underscores the urgent need for developing legal and technical privacy frameworks for AI interactions. Companies that proactively address these issues will be best positioned to weather regulatory uncertainty and thrive in the AI-powered future.

  1. The lack of legal confidentiality for AI chatbots in sensitive applications like therapy or corporate strategy could expose businesses to legal risks, as sensitive information shared could potentially be subject to discovery or subpoena in legal proceedings.
  2. To safeguard user privacy and trust, businesses must prioritize robust data governance frameworks and transparent communication about the limitations of AI confidentiality.
  3. The July 25, 2025, warning from OpenAI CEO Sam Altman about this legal gap could deter companies from adopting AI tools for critical confidential tasks or force them to implement additional safeguards.
  4. In the healthcare and wellness space, where many companies have implemented AI chatbots for mental health support, the lack of legal confidentiality could limit the effectiveness of AI in mental health applications and slow the growth of the digital health market.
  5. Startups building innovative AI-based products in sensitive areas must weigh the legal and privacy risks of AI interactions in order to scale and grow responsibly in an AI-powered future.
  6. Policymakers must develop comprehensive regulations around AI privacy and data security to protect users and businesses alike, and in turn foster an environment that encourages investment in AI-powered businesses and startups.
