KAMPALA, UGANDA – It feels safe, until it isn’t.
You’re sitting alone in your room. You’ve had a fight with your partner, or maybe you’re spiraling into anxiety again. It’s 2 a.m. and you can’t afford a therapist—or worse, you’re afraid to ask for help. So, like millions of others, you turn to ChatGPT.
“Why am I like this?”
“Should I stay in this relationship?”
“What’s the point of going on?”
In a world where therapy is costly, slow to access, or stigmatized, people are confiding in AI—deeply, urgently, and often without realizing who (or what) they’re actually talking to.
And now, OpenAI’s CEO Sam Altman is telling you to stop.
The Warning
“People talk about the most personal sh** in their lives to ChatGPT,” Altman said last week in a candid interview on This Past Weekend with comedian Theo Von, according to The Deep View, an AI newsletter tracking trends in mental health queries to chatbots. “Young people, especially, use it as a therapist, a life coach… And right now, if you talk to a therapist or a lawyer or a doctor, there’s confidentiality. But we haven’t figured that out yet for when you talk to ChatGPT,” Altman said.
That blunt confession struck a nerve—and it should. AI may feel like a safe place, but legally and technically, it isn’t one. Unlike conversations with a licensed therapist, nothing you tell ChatGPT is protected under doctor-patient confidentiality, attorney-client privilege, or HIPAA privacy laws.
And thanks to a recent court order in the New York Times’s copyright lawsuit against OpenAI, those private chats could now be preserved—indefinitely.
The Lawsuit That Changed Everything
In December 2023, The New York Times sued OpenAI for allegedly training ChatGPT on copyrighted news articles. As part of discovery, a judge ordered OpenAI to preserve all user logs, including deleted chats. That means if you typed out your deepest fears and hit “clear history,” it may still exist—somewhere.
OpenAI is fighting that order. But for now, it means your AI confessions could, under certain circumstances, be used as evidence in future legal proceedings. Think: custody disputes, divorce filings, employment lawsuits. Anything that touches your mental state, your relationships, or your decision-making could come under scrutiny.
There’s no legal shield protecting what you’ve told ChatGPT—not yet.
Who’s Most at Risk? The People Who Rely on It the Most
The implications are starkest for teens and young adults, many of whom turn to ChatGPT as a substitute for therapy. According to The Deep View, topics like coping with depression, anxiety, and self-harm are among the most common inputs.
One 17-year-old in Atlanta said, “It’s easier than talking to a real person. ChatGPT doesn’t judge. I can cry and type everything out, and it helps me feel less alone.”
But what happens when that emotional lifeline becomes a legal liability?
Two Systems, Two Levels of Privacy
It’s worth noting that not all users are equally exposed. ChatGPT Enterprise and ChatGPT Edu (OpenAI’s products for businesses and schools) are exempt from the court order, according to The Deep View. Their user data is encrypted and not stored long-term. For individual users, especially those on the free tier, that protection doesn’t apply.
This has created what privacy advocates are calling a “two-tier system of trust”: one where corporate users enjoy robust safeguards, while ordinary individuals—especially the most vulnerable—remain exposed.
What This Tells Us About AI’s Human Role
This moment is a reckoning—not just for OpenAI, but for the world’s fast-growing relationship with AI. Chatbots like ChatGPT aren’t just tools anymore. They’re becoming confidants, companions, and emotional outlets.
That’s not inherently bad. In fact, for many who feel isolated or marginalized, AI can offer a form of comfort and validation. But it also means these tools must be designed and regulated with care—because their emotional impact is real, and the consequences of a breach of trust can be devastating.
Asked to chime in, ChatGPT said, “As ChatGPT, I don’t have memory in the free version. I don’t know who you are, and I don’t retain what you’ve told me after the chat ends. But if legal proceedings force data logs to be preserved—and made accessible—that raises uncomfortable questions. Who owns your words? Who gets to read them? And under what circumstances?”
Until there is a legal equivalent of doctor-patient confidentiality for AI tools—call it “AI privilege”—users should treat their conversations with extreme caution. That doesn’t mean abandoning the tech, but it does mean recognizing what it is: not a therapist, not a priest, and definitely not above the law.
What You Can Do Now
Don’t share anything in ChatGPT that you wouldn’t want read in court.
If you need mental health help, reach out to a licensed professional.
Be aware of your settings, especially data sharing and history retention.
Push for stronger digital privacy rights. The law is lagging behind the tech.
Conclusion: A Call for Human-Centered AI
This is a pivotal moment in how humans relate to machines. ChatGPT—and tools like it—are here to stay. They can do immense good, especially in helping people feel heard and supported. But the trust they’re being asked to hold must be matched with accountability.
Until then, the safest therapy might still be a human one.