This content originally appeared on HackerNoon and was authored by Funmilayo Owolabi
This is how it begins. You are lying on your bed at home with your laptop open, conversing with your AI therapist and telling it things you would never reveal to another human. It feels harmless, since it's just you and a calm, understanding chatbot.
You tell it about the money you stole from your boss. The fight you had with your spouse. The nightmare you had. It feels like an inconsequential confession.
But weeks later, you receive an email. Your AI therapist has been summoned to court.
It sounds ridiculous, right? An AI can't sit in a witness box. But your information? Every single word you typed? That's on a server somewhere, and under the right legal circumstances it can be extracted, packaged, and handed over, with or without your consent.
The False Privacy
The majority of AI-powered mental health apps fall outside the scope of the Health Insurance Portability and Accountability Act (HIPAA), the United States law that protects your medical information. HIPAA only applies to "covered entities" such as health insurers, licensed healthcare providers, and the organizations that work directly with them. Why the gap? Most of these platforms cannot be regarded as "healthcare providers." They call themselves "coaching apps" or "wellness tools", a small label change that sidesteps strict privacy requirements.
The Technicality Problem
Most AI therapy tools lean on their Terms of Service, the long document nobody reads, instead of solid legal protections. Buried in that document, you may have unknowingly agreed to share your data with "partners" or "third-party providers." In other words, they can share your conversations, and you already said yes.
Data Governance and Its Gaps
- The General Data Protection Regulation (GDPR) in the European Union gives people the right to access, correct, and erase their data, but enforcement is slow and apps operating globally often sidestep it.
- The California Consumer Privacy Act (CCPA) lets consumers see what is collected about them and request deletion, but it only applies to businesses that meet certain size and revenue thresholds.
- With no universal AI privacy law, protections are scattered, leaving plenty of gaps for sensitive information to slip through.
The Subpoena Scenario
Imagine you told your AI therapist about a fight at your office that ended in a threat. If a court or your employer believes those conversations are relevant to a case, they could subpoena the logs. Without a strong legal shield, the platform may be obligated to comply.
The scariest part? You might never even be told until it’s too late.
The Future We Need
If we want to trust AI with our mental health, we need clear, enforceable rules: an AI Privacy Bill of Rights that treats emotional and psychological information with the same gravity as medical records. That means:
- Clear consent for storage and usage
- Explicit timelines for data deletion (sketched in code after this list)
- Transparency on where your data is stored and who can access it
- Legal protections that prevent the use of AI therapy logs in court without the owner’s consent
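What would "clear consent" and an "explicit deletion timeline" actually look like inside a product? Here is a minimal, hypothetical sketch in Python. The names (ChatLog, RetentionPolicy, purge_expired) and the 30-day window are illustrative assumptions, not any real platform's implementation.

```python
# Hypothetical sketch: consent and retention as enforceable properties of the data,
# not promises buried in a Terms of Service. All names and values are illustrative.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class ChatLog:
    user_id: str
    text: str
    created_at: datetime
    consented_to_storage: bool = False  # explicit, recorded consent for storage


@dataclass
class RetentionPolicy:
    max_age: timedelta = timedelta(days=30)  # explicit, published deletion timeline

    def purge_expired(self, logs: list[ChatLog]) -> list[ChatLog]:
        """Keep only logs that are both consented to and within the retention window."""
        now = datetime.now(timezone.utc)
        return [
            log for log in logs
            if log.consented_to_storage and (now - log.created_at) <= self.max_age
        ]


if __name__ == "__main__":
    policy = RetentionPolicy()
    logs = [
        ChatLog("u1", "old session", datetime.now(timezone.utc) - timedelta(days=90), True),
        ChatLog("u1", "recent session", datetime.now(timezone.utc) - timedelta(days=2), True),
        ChatLog("u2", "no consent recorded", datetime.now(timezone.utc), False),
    ]
    print(len(policy.purge_expired(logs)))  # -> 1: only the recent, consented log survives
```

The point of the sketch is simple: if consent and retention are encoded where the data lives, there is far less to extract when a subpoena arrives.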
So before you open up to that empathetic chatbot, ask yourself: who is listening, and who else might end up reading what you say?