OpenAI CEO Warns ChatGPT Lacks Therapy-Style Confidentiality

OpenAI CEO Sam Altman warned on Theo Von’s podcast that ChatGPT conversations carry no legal confidentiality comparable to doctor-patient or attorney-client privilege. Users seeking therapy or sensitive advice could see their chats subpoenaed in court. OpenAI is also appealing a court order, issued in The New York Times lawsuit, that would require it to preserve user chats — underscoring the urgent need for AI privacy frameworks.

Published July 26, 2025 at 10:08 PM EDT in Artificial Intelligence (AI)

OpenAI CEO Sam Altman recently cautioned users on the This Past Weekend w/ Theo Von podcast against treating ChatGPT as a substitute for professional therapy or legal advice. He emphasized that the AI industry lacks a clear legal framework to grant the kind of confidentiality protections we expect from doctors, lawyers, and therapists.

The Privacy Gap in AI-Driven Therapy

Unlike meetings with licensed professionals, conversations with ChatGPT don’t enjoy doctor-patient privilege or attorney-client confidentiality. Altman pointed out that sensitive details shared during AI chats could be subject to legal discovery, leaving users exposed in lawsuits or regulatory investigations.

Legal Implications and Subpoena Risks

OpenAI is currently appealing a court order in its lawsuit with The New York Times that would force the company to preserve chats from hundreds of millions of ChatGPT users. If courts can compel data retention, AI providers could face broader demands from law enforcement, civil litigants, or other third parties seeking user transcripts.

Steps to Safeguard AI Conversations

  • Map data flows and storage locations to identify where sensitive AI interactions occur.
  • Encrypt chat logs both in transit and at rest using industry-standard protocols.
  • Implement clear consent mechanisms that inform users about potential legal exposure.
  • Engage legal and compliance teams to review AI privacy policies regularly.
  • Provide transparent disclosures and options for users to delete or opt out of data retention.

Building a Framework for AI Confidentiality

Establishing confidentiality for AI-driven conversations will require collaboration between policymakers, technologists, and legal experts. Organizations can pioneer best practices by integrating privacy-by-design principles, defining clear retention policies, and adopting ethical AI governance models that mirror existing professional privileges.

As AI adoption grows across industries, enterprises must anticipate privacy and compliance challenges in sensitive applications. QuarkyByte partners with leaders to architect robust AI governance frameworks, ensuring that cutting-edge solutions protect user data and align with emerging legal standards.
