OpenAI CEO Warns ChatGPT Conversations Can Be Used as Legal Evidence


Last Updated 1 month ago by Ashley Michael

OpenAI CEO Sam Altman has warned users that conversations with ChatGPT lack legal confidentiality protections and could be used as evidence in civil or criminal cases.

Speaking on comedian Theo Von’s podcast “This Past Weekend,” Altman revealed that ChatGPT interactions do not have the same privacy protections as conversations with doctors, lawyers, or therapists.

“People talk about the most personal details in their lives to ChatGPT,” Altman said during the July 23 episode. “Young people especially use it as a therapist, a life coach for relationship problems.”

Unlike protected communications with licensed professionals, ChatGPT conversations have no legal privilege. OpenAI could be legally compelled to produce chat logs in lawsuits or criminal investigations.

“If you go talk to ChatGPT about your most sensitive stuff and then there’s like a lawsuit or whatever, we could be required to produce that, and I think that’s very screwed up,” Altman stated.

The warning comes as OpenAI faces a court order requiring the company to preserve all ChatGPT user data, including deleted conversations. U.S. Magistrate Judge Ona T. Wang issued the order on May 13, 2025, as part of a copyright lawsuit filed by The New York Times.

The order was upheld by District Judge Sidney Stein on June 26. It applies to users of ChatGPT Free, Plus, Pro, and Team accounts. Enterprise and educational users are exempt from the data retention requirement.

Under normal circumstances, deleted ChatGPT conversations are removed within 30 days. The court order has suspended this practice indefinitely, meaning conversations users believe were deleted are being retained.

Privacy experts warn that ChatGPT conversations could be subpoenaed in various legal situations, including contract disputes, harassment claims, intellectual property cases, and criminal investigations.

Unlike end-to-end encrypted messaging services, OpenAI can read every ChatGPT conversation. The company uses chat data for model training and monitors conversations for misuse.

Altman advocated for establishing “AI privilege” that would protect user conversations with chatbots similarly to communications with human professionals. However, no such legal framework currently exists in any jurisdiction.

Users are advised to treat AI chat conversations like emails or text messages that can be subpoenaed. Privacy experts recommend avoiding sharing sensitive or legally compromising information with AI chatbots.
