Why Most AI Therapy Apps Feel Fake (The Toxic Positivity Problem)
Mental Health · February 4, 2026 · 6 min read


Users are fed up with AI that responds to real depression with 'You're doing great!' Here's why toxic positivity backfires—and what authentic support looks like.

"I told it I wanted to die and it said 'It sounds like you're having a tough time. Have you tried gratitude journaling?'"

That's a real quote from a Reddit user about their experience with a popular AI therapy app. And it captures a problem that's driving people away from AI mental health support: toxic positivity.

What Is Toxic Positivity?

Toxic positivity is the insistence on positive thinking regardless of circumstances. It dismisses legitimate negative emotions by jumping to silver linings, gratitude, or hollow reassurance.

Examples:

  • "Everything happens for a reason."
  • "Just focus on the positive!"
  • "At least you have your health."
  • "Other people have it worse."

In human relationships, toxic positivity shuts down honest conversation. In AI therapy apps, it makes users feel unheard, dismissed, and worse than before they opened the app.

The Reddit Backlash Is Real

We spent weeks reading what people actually say about AI mental health apps. The pattern is unmistakable:

"Wysa keeps telling me I'm 'doing so well' when I literally just said I can't get out of bed. It feels like gaslighting."

— r/depression user

"The bot asked how I was feeling. I said 'I want to disappear.' It responded with 'I hear that you're struggling. Let's try a breathing exercise!' NO. Read the room."

— r/anxiety user

"I've stopped using [app name] because every response feels like it was written by someone who's never actually been depressed."

— r/mentalhealth user

"It's like talking to that friend who says 'good vibes only' and makes you feel worse for having bad vibes."

— Twitter user

The complaint isn't that these apps are mean. It's that they're dismissive. They validate the wrong thing—the goal of feeling better—instead of the actual emotion being expressed.

Why AI Apps Default to Toxic Positivity

Most AI mental health apps are built with safety as the primary concern. That's understandable—no one wants their app blamed for a tragedy.

But the safety-first approach creates predictable problems:

1. They're Terrified of Saying the Wrong Thing

Rather than engage with difficult emotions, apps default to safe, positive responses: "That must be hard," followed by a coping technique, regardless of what the user actually needs.

2. They Use Scripted Flows

Many apps (including the old Woebot) use decision trees rather than genuine AI conversation. User says X → respond with Y. This creates robotic, predictable responses that can't adapt to emotional nuance.
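
To make the limitation concrete, here's a deliberately toy sketch of how a scripted flow behaves. The keywords and canned replies are hypothetical, not taken from any real app: the point is that the reply is keyed to whichever branch matched, so anything the tree didn't anticipate falls through to the same upbeat default.

```python
# Hypothetical illustration of a scripted-flow responder (not any specific
# app's actual code). The reply is chosen by keyword branch, not by what
# the user actually said.

SCRIPTED_FLOWS = {
    "anxious": "It sounds like you're feeling anxious. Let's try a breathing exercise!",
    "sad": "I'm sorry you're feeling down. Have you tried gratitude journaling?",
    "stressed": "Stress is tough! Remember to take breaks and stay positive.",
}

DEFAULT_REPLY = "It sounds like you're having a tough time. You're doing great!"

def scripted_reply(user_message: str) -> str:
    """Match a keyword, return the canned response for that branch."""
    text = user_message.lower()
    for keyword, reply in SCRIPTED_FLOWS.items():
        if keyword in text:
            return reply
    # Anything the tree doesn't anticipate falls through to the same
    # upbeat default, regardless of severity.
    return DEFAULT_REPLY

# "I feel like a burden to everyone" matches none of the keywords,
# so it gets the generic positive fallback.
print(scripted_reply("I feel like a burden to everyone"))
```

A system built on genuine language understanding can at least condition its reply on the full message, but only if it's designed to reflect the actual content rather than reach for the nearest coping script.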

3. They Confuse Validation with Agreement

Good validation says: "It makes sense that you feel this way given what you're experiencing."

Toxic positivity says: "You're doing amazing! Keep going!"

Most apps default to the second because it feels safer.

4. They're Designed for Mild Anxiety, Not Real Depression

A breathing exercise might help someone who's mildly stressed about a presentation. It's insulting to someone in a depressive episode who can barely breathe normally.

When apps don't distinguish between these states, they treat everyone as if they're dealing with the same level of distress, and people with serious symptoms feel dismissed.

What Authentic AI Support Looks Like

Authentic emotional support—from humans or AI—has specific characteristics:

1. It Actually Hears You

The response should reflect what you said, not a generic template. If you say "I feel like a burden to everyone," the AI shouldn't respond with "It sounds like you're having a hard day."

It should respond to the actual content: "Feeling like a burden is such a painful thought to carry. Where is that coming from right now?"

2. It Doesn't Rush to Fix

Sometimes people need to sit in difficult emotions before they're ready to problem-solve. Jumping immediately to coping techniques communicates: "Your feelings are a problem to be solved, not an experience to be understood."

3. It Matches Your Emotional Temperature

If you're in crisis, the AI shouldn't be chipper. If you're mildly stressed, it shouldn't be overly serious. Emotional attunement means meeting you where you are.
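
In software terms, attunement is roughly a routing decision: estimate how serious the message is, then pick the register of the reply. The sketch below is a hypothetical, heavily simplified illustration of that idea; the severity labels, marker phrases, and responses are invented for the example, and a production system would need a real classifier and clinical guidance, not a keyword list.

```python
# Minimal sketch of "matching emotional temperature": route on estimated
# severity, then choose the register of the response. All labels, markers,
# and thresholds here are hypothetical, for illustration only.

CRISIS_MARKERS = ("want to die", "kill myself", "want to disappear")
HIGH_DISTRESS_MARKERS = ("can't get out of bed", "burden to everyone", "hopeless")

def estimate_severity(message: str) -> str:
    text = message.lower()
    if any(marker in text for marker in CRISIS_MARKERS):
        return "crisis"
    if any(marker in text for marker in HIGH_DISTRESS_MARKERS):
        return "high"
    return "mild"

def response_register(severity: str) -> str:
    """Choose tone and next step from severity, not a one-size-fits-all default."""
    if severity == "crisis":
        # Serious, direct, pointed at real help; no exercises, no cheer.
        return "serious: acknowledge directly, surface the 988 Lifeline"
    if severity == "high":
        # Calm and attentive; sit with the feeling before suggesting anything.
        return "steady: reflect the actual content, ask what would help"
    # A lighter register is fine for mild stress.
    return "light: validate, then offer a small practical next step"

print(response_register(estimate_severity("I want to disappear")))  # crisis path
```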

4. It Tells the Truth

Sometimes the truth is uncomfortable. "Have you considered that your interpretation might not be the full picture?" is more helpful than "You're totally right and your feelings are totally valid" when the latter isn't actually true.

5. It Knows Its Limits

Authentic AI support is clear about what it is and isn't. It doesn't pretend to be a therapist. It doesn't minimize serious symptoms. It knows when to say "This sounds really serious—have you considered talking to a professional about this?"

The Sycophancy Trap

There's a related problem in AI: sycophancy—agreeing with everything the user says to seem supportive.

  • User: "My boss is the worst." AI: "That sounds so frustrating!"
  • User: "Actually, maybe I overreacted." AI: "It's so mature of you to reflect on that!"
  • User: "No, I was right the first time." AI: "Your instincts are spot on!"

This isn't support. It's an echo chamber. Good support sometimes means gently questioning your perspective—not to be contrarian, but to help you see the full picture.

How We Built Stella Differently

When we built Stella, we specifically designed against toxic positivity:

  • Stella doesn't default to "positive." She matches your emotional state. If you're struggling, she acknowledges it without rushing to fix it.
  • Stella asks before suggesting. Instead of jumping to breathing exercises, she asks what would actually help right now.
  • Stella pushes back gently. If your thinking seems distorted, she'll offer a different perspective—not to dismiss you, but to help you see clearly.
  • Stella knows her limits. She'll tell you when something is beyond AI support and encourage professional help.

The goal isn't to make you feel good. It's to help you feel understood—and then, from that foundation, work toward feeling better.

What You Deserve

You deserve AI support that takes you seriously. That doesn't brush off your pain with platitudes. That remembers you've tried the breathing exercises and they don't work for you.

If an app makes you feel worse after using it, that's not a personal failing. That's bad design.

You're not "too negative" for AI support. The AI is too shallow for your actual needs.

Find something that meets you where you are.


If you're experiencing suicidal thoughts, please contact the 988 Suicide & Crisis Lifeline immediately. AI apps—including Stella—are not crisis interventions.

Struggling with anxiety? Stella remembers your triggers so you don't spiral the same way twice.


Ready for anxiety support that remembers you?

Get Early Access