Can You Sue an AI Therapist? The Malpractice Question Nobody's Answering
AI & Technology · February 7, 2026 · 8 min read

Psychology Today is asking hard questions about AI therapy liability. If a chatbot gives harmful advice, who's responsible? The legal framework doesn't exist yet—and that's a problem.

If a human therapist gave you harmful advice that worsened your condition, you could sue for malpractice. But what if that advice came from an AI chatbot? Who do you sue—the app developer, the AI company, the tech platform, or no one at all?

Psychology Today recently published a piece titled "Suing Therapeutic AI Systems for Malpractice"—and it reveals a troubling gap in our legal system. As AI therapy apps multiply, the rules for holding them accountable barely exist.

The Legal Vacuum

Traditional medical malpractice law is built around a clear framework:

  • Duty of care — The provider had a responsibility to the patient
  • Breach of duty — The provider failed to meet accepted standards
  • Causation — The breach directly caused harm
  • Damages — The patient suffered measurable harm

This works when you're suing Dr. Smith, who has a license, follows professional standards, and carries malpractice insurance.

It doesn't work for AI therapy apps.

Who Has the "Duty of Care"?

When you use an AI therapy app, who's responsible for your wellbeing?

  • The company that built the app?
  • The AI model provider (OpenAI, Anthropic, etc.)?
  • The app store that distributed it?
  • No one—because you clicked "I agree" on terms of service that disclaim all liability?

Most AI therapy apps explicitly state they're "not medical advice" and "not a replacement for professional care." These disclaimers exist precisely to avoid liability, but should a line of fine print be enough to shield a company from responsibility?

What Are the "Standards"?

Medical malpractice requires showing the provider failed to meet accepted professional standards. But what are the standards for AI therapy?

There are no licensing requirements. No professional boards. No universally accepted best practices. The FDA hasn't issued clear guidance. Professional psychology associations are still figuring out what to recommend.

How do you prove an AI "failed to meet standards" when the standards don't exist?

Can You Prove Causation?

Mental health is complex. If someone's condition worsens after using an AI app, how do you prove the app caused the harm rather than:

  • The underlying condition progressing naturally
  • Life circumstances changing
  • Other treatments or lack thereof
  • The person's own choices

Establishing causation in mental health cases is already difficult. Adding AI to the mix makes it even murkier.

The Cases Are Coming

Despite these legal obstacles, lawsuits are starting to appear:

  • Character.AI lawsuit — The family of a 14-year-old who died by suicide has sued, alleging the AI companion contributed to his death
  • Replika complaints — Multiple users have reported harmful experiences, though major lawsuits are still pending
  • General AI cases — Courts are grappling with AI liability in other contexts (copyright, defamation) that will set precedents

The legal system is being forced to adapt—but it's happening slowly, case by case, with inconsistent results.

What Should the Rules Be?

The Psychology Today piece, along with legal scholars, points to several possible frameworks:

1. Product Liability

Treat AI therapy apps like products. If a product is defective and causes harm, the manufacturer is liable—regardless of warnings and disclaimers.

This would mean AI developers couldn't hide behind "not medical advice" disclaimers if their product was marketed for mental health support.

2. Professional Standards

Create actual standards for AI mental health tools. Require that they:

  • Recognize crisis situations and escalate appropriately
  • Don't provide advice outside their competence
  • Clearly disclose limitations
  • Maintain appropriate data privacy

Apps that fail to meet standards could be held liable for resulting harm.

3. Informed Consent

Require AI therapy apps to obtain genuine informed consent—not just terms of service nobody reads, but clear communication about:

  • What the AI can and cannot do
  • Known risks and limitations
  • When human help is needed
  • What data is collected and how it's used

4. Insurance Requirements

Require AI therapy apps to carry liability insurance—just like human therapists. This would create market pressure for safety, as insurers would demand safeguards before providing coverage.

What This Means for Users Today

Right now, if you're harmed by an AI therapy app, your legal options are limited. This isn't fair—but it's reality.

To protect yourself:

Choose Apps That Take Safety Seriously

Look for apps that:

  • Have clear crisis protocols (not just disclaimers)
  • Are transparent about limitations
  • Have real humans involved in safety review
  • Don't make grandiose claims about replacing therapy

Treat AI as Supplement, Not Replacement

AI therapy apps work best as a complement to—not replacement for—professional care. If you're dealing with serious symptoms, see a human.

Document Your Experience

If you have a negative experience with an AI therapy app, document it. Screenshots, dates, descriptions of harm. This may be useful if legal frameworks improve or if you pursue complaints through other channels.

Report Harmful Apps

Both Apple and Google allow you to report apps that cause harm. Regulatory agencies are starting to pay attention. Your reports matter.

Where We Stand

At Stella, we think about liability seriously:

  • We're honest about what we are. Stella is not a therapist and doesn't pretend to be.
  • Crisis protocols are built in. When conversations suggest serious risk, Stella redirects to professional resources.
  • We don't make medical claims. We're emotional support—not treatment.
  • We'd welcome clear standards. The industry needs them. We'd rather compete on quality than on who has the best legal disclaimers.

The Bottom Line

If an AI therapy app harms you today, your legal options are murky at best. That's a problem the legal system and regulators need to fix.

In the meantime, choose apps that take safety seriously, not just in their marketing but in how they're actually built. The disclaimers tell you what a company is trying to avoid. The features tell you what it actually cares about.

Stella is built with safety as a core value, not an afterthought. See our approach.


This article is not legal advice. If you believe you've been harmed by an AI therapy app, consult with a licensed attorney in your jurisdiction.

Struggling with anxiety? Stella remembers your triggers so you don't spiral the same way twice.

Get Early Access
